Test Report: KVM_Linux_crio 20720

b7440dc9e9eb90138d871b2ff610c46584e06ed3:2025-05-10:39516

Failed tests (21/321)

TestAddons/parallel/Ingress (151.38s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-573653 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-573653 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-573653 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [99d0701e-2474-4490-adc0-d9078c08bee4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [99d0701e-2474-4490-adc0-d9078c08bee4] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 7.004283967s
I0510 17:55:33.083809  395980 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-573653 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-573653 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.775966074s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-573653 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-573653 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.219
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-573653 -n addons-573653
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-573653 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-573653 logs -n 25: (1.569797822s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-783034                                                                     | download-only-783034 | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	| delete  | -p download-only-820244                                                                     | download-only-820244 | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	| delete  | -p download-only-783034                                                                     | download-only-783034 | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-848338 | jenkins | v1.35.0 | 10 May 25 17:52 UTC |                     |
	|         | binary-mirror-848338                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:38303                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-848338                                                                     | binary-mirror-848338 | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	| addons  | disable dashboard -p                                                                        | addons-573653        | jenkins | v1.35.0 | 10 May 25 17:52 UTC |                     |
	|         | addons-573653                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-573653        | jenkins | v1.35.0 | 10 May 25 17:52 UTC |                     |
	|         | addons-573653                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-573653 --wait=true                                                                | addons-573653        | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:54 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-573653 addons disable                                                                | addons-573653        | jenkins | v1.35.0 | 10 May 25 17:54 UTC | 10 May 25 17:54 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-573653 addons disable                                                                | addons-573653        | jenkins | v1.35.0 | 10 May 25 17:54 UTC | 10 May 25 17:54 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-573653        | jenkins | v1.35.0 | 10 May 25 17:54 UTC | 10 May 25 17:54 UTC |
	|         | -p addons-573653                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-573653 addons                                                                        | addons-573653        | jenkins | v1.35.0 | 10 May 25 17:55 UTC | 10 May 25 17:55 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-573653 addons                                                                        | addons-573653        | jenkins | v1.35.0 | 10 May 25 17:55 UTC | 10 May 25 17:55 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-573653 addons                                                                        | addons-573653        | jenkins | v1.35.0 | 10 May 25 17:55 UTC | 10 May 25 17:55 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-573653 addons disable                                                                | addons-573653        | jenkins | v1.35.0 | 10 May 25 17:55 UTC | 10 May 25 17:55 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-573653 ip                                                                            | addons-573653        | jenkins | v1.35.0 | 10 May 25 17:55 UTC | 10 May 25 17:55 UTC |
	| addons  | addons-573653 addons disable                                                                | addons-573653        | jenkins | v1.35.0 | 10 May 25 17:55 UTC | 10 May 25 17:55 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-573653 addons                                                                        | addons-573653        | jenkins | v1.35.0 | 10 May 25 17:55 UTC | 10 May 25 17:55 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-573653 addons disable                                                                | addons-573653        | jenkins | v1.35.0 | 10 May 25 17:55 UTC | 10 May 25 17:55 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-573653 ssh cat                                                                       | addons-573653        | jenkins | v1.35.0 | 10 May 25 17:55 UTC | 10 May 25 17:55 UTC |
	|         | /opt/local-path-provisioner/pvc-615a4158-a857-4e10-b582-9688b4023855_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-573653 addons disable                                                                | addons-573653        | jenkins | v1.35.0 | 10 May 25 17:55 UTC | 10 May 25 17:56 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-573653 ssh curl -s                                                                   | addons-573653        | jenkins | v1.35.0 | 10 May 25 17:55 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-573653 addons                                                                        | addons-573653        | jenkins | v1.35.0 | 10 May 25 17:55 UTC | 10 May 25 17:55 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-573653 addons                                                                        | addons-573653        | jenkins | v1.35.0 | 10 May 25 17:55 UTC | 10 May 25 17:55 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-573653 ip                                                                            | addons-573653        | jenkins | v1.35.0 | 10 May 25 17:57 UTC | 10 May 25 17:57 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/05/10 17:52:19
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0510 17:52:19.641650  396583 out.go:345] Setting OutFile to fd 1 ...
	I0510 17:52:19.641914  396583 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:52:19.641924  396583 out.go:358] Setting ErrFile to fd 2...
	I0510 17:52:19.641928  396583 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:52:19.642120  396583 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-388787/.minikube/bin
	I0510 17:52:19.642764  396583 out.go:352] Setting JSON to false
	I0510 17:52:19.643710  396583 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":27288,"bootTime":1746872252,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0510 17:52:19.643805  396583 start.go:140] virtualization: kvm guest
	I0510 17:52:19.645878  396583 out.go:177] * [addons-573653] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0510 17:52:19.647197  396583 notify.go:220] Checking for updates...
	I0510 17:52:19.647223  396583 out.go:177]   - MINIKUBE_LOCATION=20720
	I0510 17:52:19.648856  396583 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0510 17:52:19.650148  396583 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20720-388787/kubeconfig
	I0510 17:52:19.651395  396583 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-388787/.minikube
	I0510 17:52:19.652518  396583 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0510 17:52:19.653736  396583 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0510 17:52:19.655140  396583 driver.go:404] Setting default libvirt URI to qemu:///system
	I0510 17:52:19.688658  396583 out.go:177] * Using the kvm2 driver based on user configuration
	I0510 17:52:19.690060  396583 start.go:304] selected driver: kvm2
	I0510 17:52:19.690081  396583 start.go:908] validating driver "kvm2" against <nil>
	I0510 17:52:19.690095  396583 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0510 17:52:19.690917  396583 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0510 17:52:19.691038  396583 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20720-388787/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0510 17:52:19.707555  396583 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0510 17:52:19.707618  396583 start_flags.go:311] no existing cluster config was found, will generate one from the flags 
	I0510 17:52:19.707909  396583 start_flags.go:975] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0510 17:52:19.707953  396583 cni.go:84] Creating CNI manager for ""
	I0510 17:52:19.707996  396583 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0510 17:52:19.708012  396583 start_flags.go:320] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0510 17:52:19.708067  396583 start.go:347] cluster config:
	{Name:addons-573653 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:addons-573653 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 17:52:19.708164  396583 iso.go:125] acquiring lock: {Name:mk19640015999219180c6685480547adf0c02201 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0510 17:52:19.710168  396583 out.go:177] * Starting "addons-573653" primary control-plane node in "addons-573653" cluster
	I0510 17:52:19.711528  396583 preload.go:131] Checking if preload exists for k8s version v1.33.0 and runtime crio
	I0510 17:52:19.711596  396583 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-cri-o-overlay-amd64.tar.lz4
	I0510 17:52:19.711609  396583 cache.go:56] Caching tarball of preloaded images
	I0510 17:52:19.711725  396583 preload.go:172] Found /home/jenkins/minikube-integration/20720-388787/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0510 17:52:19.711736  396583 cache.go:59] Finished verifying existence of preloaded tar for v1.33.0 on crio
	I0510 17:52:19.712073  396583 profile.go:143] Saving config to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/config.json ...
	I0510 17:52:19.712104  396583 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/config.json: {Name:mk9a2662e99c30387a8bdcdb325232dcb9d463f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:52:19.712254  396583 start.go:360] acquireMachinesLock for addons-573653: {Name:mk11499d7756d503a7a24339ad1a7f9ab9dc0fab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0510 17:52:19.712303  396583 start.go:364] duration metric: took 34.593µs to acquireMachinesLock for "addons-573653"
	I0510 17:52:19.712318  396583 start.go:93] Provisioning new machine with config: &{Name:addons-573653 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:addons-573653 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0510 17:52:19.712374  396583 start.go:125] createHost starting for "" (driver="kvm2")
	I0510 17:52:19.714178  396583 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0510 17:52:19.714391  396583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 17:52:19.714443  396583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:52:19.729288  396583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42373
	I0510 17:52:19.729884  396583 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:52:19.730507  396583 main.go:141] libmachine: Using API Version  1
	I0510 17:52:19.730525  396583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:52:19.730975  396583 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:52:19.731177  396583 main.go:141] libmachine: (addons-573653) Calling .GetMachineName
	I0510 17:52:19.731350  396583 main.go:141] libmachine: (addons-573653) Calling .DriverName
	I0510 17:52:19.731510  396583 start.go:159] libmachine.API.Create for "addons-573653" (driver="kvm2")
	I0510 17:52:19.731538  396583 client.go:168] LocalClient.Create starting
	I0510 17:52:19.731582  396583 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem
	I0510 17:52:20.067528  396583 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem
	I0510 17:52:20.232990  396583 main.go:141] libmachine: Running pre-create checks...
	I0510 17:52:20.233016  396583 main.go:141] libmachine: (addons-573653) Calling .PreCreateCheck
	I0510 17:52:20.233548  396583 main.go:141] libmachine: (addons-573653) Calling .GetConfigRaw
	I0510 17:52:20.234073  396583 main.go:141] libmachine: Creating machine...
	I0510 17:52:20.234090  396583 main.go:141] libmachine: (addons-573653) Calling .Create
	I0510 17:52:20.234279  396583 main.go:141] libmachine: (addons-573653) creating KVM machine...
	I0510 17:52:20.234301  396583 main.go:141] libmachine: (addons-573653) creating network...
	I0510 17:52:20.235840  396583 main.go:141] libmachine: (addons-573653) DBG | found existing default KVM network
	I0510 17:52:20.236395  396583 main.go:141] libmachine: (addons-573653) DBG | I0510 17:52:20.236235  396605 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0000136c0}
	I0510 17:52:20.236421  396583 main.go:141] libmachine: (addons-573653) DBG | created network xml: 
	I0510 17:52:20.236436  396583 main.go:141] libmachine: (addons-573653) DBG | <network>
	I0510 17:52:20.236454  396583 main.go:141] libmachine: (addons-573653) DBG |   <name>mk-addons-573653</name>
	I0510 17:52:20.236493  396583 main.go:141] libmachine: (addons-573653) DBG |   <dns enable='no'/>
	I0510 17:52:20.236513  396583 main.go:141] libmachine: (addons-573653) DBG |   
	I0510 17:52:20.236531  396583 main.go:141] libmachine: (addons-573653) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0510 17:52:20.236543  396583 main.go:141] libmachine: (addons-573653) DBG |     <dhcp>
	I0510 17:52:20.236549  396583 main.go:141] libmachine: (addons-573653) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0510 17:52:20.236561  396583 main.go:141] libmachine: (addons-573653) DBG |     </dhcp>
	I0510 17:52:20.236572  396583 main.go:141] libmachine: (addons-573653) DBG |   </ip>
	I0510 17:52:20.236587  396583 main.go:141] libmachine: (addons-573653) DBG |   
	I0510 17:52:20.236598  396583 main.go:141] libmachine: (addons-573653) DBG | </network>
	I0510 17:52:20.236619  396583 main.go:141] libmachine: (addons-573653) DBG | 
	I0510 17:52:20.242285  396583 main.go:141] libmachine: (addons-573653) DBG | trying to create private KVM network mk-addons-573653 192.168.39.0/24...
	I0510 17:52:20.315643  396583 main.go:141] libmachine: (addons-573653) setting up store path in /home/jenkins/minikube-integration/20720-388787/.minikube/machines/addons-573653 ...
	I0510 17:52:20.315688  396583 main.go:141] libmachine: (addons-573653) building disk image from file:///home/jenkins/minikube-integration/20720-388787/.minikube/cache/iso/amd64/minikube-v1.35.0-1746739450-20720-amd64.iso
	I0510 17:52:20.315701  396583 main.go:141] libmachine: (addons-573653) DBG | private KVM network mk-addons-573653 192.168.39.0/24 created
	I0510 17:52:20.315720  396583 main.go:141] libmachine: (addons-573653) DBG | I0510 17:52:20.315541  396605 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20720-388787/.minikube
	I0510 17:52:20.315982  396583 main.go:141] libmachine: (addons-573653) Downloading /home/jenkins/minikube-integration/20720-388787/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20720-388787/.minikube/cache/iso/amd64/minikube-v1.35.0-1746739450-20720-amd64.iso...
	I0510 17:52:20.587156  396583 main.go:141] libmachine: (addons-573653) DBG | I0510 17:52:20.586981  396605 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/addons-573653/id_rsa...
	I0510 17:52:20.691473  396583 main.go:141] libmachine: (addons-573653) DBG | I0510 17:52:20.691264  396605 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/addons-573653/addons-573653.rawdisk...
	I0510 17:52:20.691513  396583 main.go:141] libmachine: (addons-573653) DBG | Writing magic tar header
	I0510 17:52:20.691528  396583 main.go:141] libmachine: (addons-573653) DBG | Writing SSH key tar header
	I0510 17:52:20.691540  396583 main.go:141] libmachine: (addons-573653) DBG | I0510 17:52:20.691405  396605 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20720-388787/.minikube/machines/addons-573653 ...
	I0510 17:52:20.691554  396583 main.go:141] libmachine: (addons-573653) setting executable bit set on /home/jenkins/minikube-integration/20720-388787/.minikube/machines/addons-573653 (perms=drwx------)
	I0510 17:52:20.691655  396583 main.go:141] libmachine: (addons-573653) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/addons-573653
	I0510 17:52:20.691687  396583 main.go:141] libmachine: (addons-573653) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20720-388787/.minikube/machines
	I0510 17:52:20.691703  396583 main.go:141] libmachine: (addons-573653) setting executable bit set on /home/jenkins/minikube-integration/20720-388787/.minikube/machines (perms=drwxr-xr-x)
	I0510 17:52:20.691744  396583 main.go:141] libmachine: (addons-573653) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20720-388787/.minikube
	I0510 17:52:20.691777  396583 main.go:141] libmachine: (addons-573653) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20720-388787
	I0510 17:52:20.691791  396583 main.go:141] libmachine: (addons-573653) setting executable bit set on /home/jenkins/minikube-integration/20720-388787/.minikube (perms=drwxr-xr-x)
	I0510 17:52:20.691812  396583 main.go:141] libmachine: (addons-573653) setting executable bit set on /home/jenkins/minikube-integration/20720-388787 (perms=drwxrwxr-x)
	I0510 17:52:20.691827  396583 main.go:141] libmachine: (addons-573653) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0510 17:52:20.691854  396583 main.go:141] libmachine: (addons-573653) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0510 17:52:20.691868  396583 main.go:141] libmachine: (addons-573653) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0510 17:52:20.691880  396583 main.go:141] libmachine: (addons-573653) creating domain...
	I0510 17:52:20.691896  396583 main.go:141] libmachine: (addons-573653) DBG | checking permissions on dir: /home/jenkins
	I0510 17:52:20.691908  396583 main.go:141] libmachine: (addons-573653) DBG | checking permissions on dir: /home
	I0510 17:52:20.691927  396583 main.go:141] libmachine: (addons-573653) DBG | skipping /home - not owner
	I0510 17:52:20.693247  396583 main.go:141] libmachine: (addons-573653) define libvirt domain using xml: 
	I0510 17:52:20.693268  396583 main.go:141] libmachine: (addons-573653) <domain type='kvm'>
	I0510 17:52:20.693275  396583 main.go:141] libmachine: (addons-573653)   <name>addons-573653</name>
	I0510 17:52:20.693284  396583 main.go:141] libmachine: (addons-573653)   <memory unit='MiB'>4000</memory>
	I0510 17:52:20.693289  396583 main.go:141] libmachine: (addons-573653)   <vcpu>2</vcpu>
	I0510 17:52:20.693293  396583 main.go:141] libmachine: (addons-573653)   <features>
	I0510 17:52:20.693297  396583 main.go:141] libmachine: (addons-573653)     <acpi/>
	I0510 17:52:20.693301  396583 main.go:141] libmachine: (addons-573653)     <apic/>
	I0510 17:52:20.693305  396583 main.go:141] libmachine: (addons-573653)     <pae/>
	I0510 17:52:20.693309  396583 main.go:141] libmachine: (addons-573653)     
	I0510 17:52:20.693314  396583 main.go:141] libmachine: (addons-573653)   </features>
	I0510 17:52:20.693318  396583 main.go:141] libmachine: (addons-573653)   <cpu mode='host-passthrough'>
	I0510 17:52:20.693334  396583 main.go:141] libmachine: (addons-573653)   
	I0510 17:52:20.693340  396583 main.go:141] libmachine: (addons-573653)   </cpu>
	I0510 17:52:20.693345  396583 main.go:141] libmachine: (addons-573653)   <os>
	I0510 17:52:20.693349  396583 main.go:141] libmachine: (addons-573653)     <type>hvm</type>
	I0510 17:52:20.693354  396583 main.go:141] libmachine: (addons-573653)     <boot dev='cdrom'/>
	I0510 17:52:20.693361  396583 main.go:141] libmachine: (addons-573653)     <boot dev='hd'/>
	I0510 17:52:20.693366  396583 main.go:141] libmachine: (addons-573653)     <bootmenu enable='no'/>
	I0510 17:52:20.693374  396583 main.go:141] libmachine: (addons-573653)   </os>
	I0510 17:52:20.693378  396583 main.go:141] libmachine: (addons-573653)   <devices>
	I0510 17:52:20.693383  396583 main.go:141] libmachine: (addons-573653)     <disk type='file' device='cdrom'>
	I0510 17:52:20.693391  396583 main.go:141] libmachine: (addons-573653)       <source file='/home/jenkins/minikube-integration/20720-388787/.minikube/machines/addons-573653/boot2docker.iso'/>
	I0510 17:52:20.693407  396583 main.go:141] libmachine: (addons-573653)       <target dev='hdc' bus='scsi'/>
	I0510 17:52:20.693414  396583 main.go:141] libmachine: (addons-573653)       <readonly/>
	I0510 17:52:20.693418  396583 main.go:141] libmachine: (addons-573653)     </disk>
	I0510 17:52:20.693426  396583 main.go:141] libmachine: (addons-573653)     <disk type='file' device='disk'>
	I0510 17:52:20.693436  396583 main.go:141] libmachine: (addons-573653)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0510 17:52:20.693445  396583 main.go:141] libmachine: (addons-573653)       <source file='/home/jenkins/minikube-integration/20720-388787/.minikube/machines/addons-573653/addons-573653.rawdisk'/>
	I0510 17:52:20.693450  396583 main.go:141] libmachine: (addons-573653)       <target dev='hda' bus='virtio'/>
	I0510 17:52:20.693491  396583 main.go:141] libmachine: (addons-573653)     </disk>
	I0510 17:52:20.693515  396583 main.go:141] libmachine: (addons-573653)     <interface type='network'>
	I0510 17:52:20.693525  396583 main.go:141] libmachine: (addons-573653)       <source network='mk-addons-573653'/>
	I0510 17:52:20.693536  396583 main.go:141] libmachine: (addons-573653)       <model type='virtio'/>
	I0510 17:52:20.693548  396583 main.go:141] libmachine: (addons-573653)     </interface>
	I0510 17:52:20.693557  396583 main.go:141] libmachine: (addons-573653)     <interface type='network'>
	I0510 17:52:20.693569  396583 main.go:141] libmachine: (addons-573653)       <source network='default'/>
	I0510 17:52:20.693581  396583 main.go:141] libmachine: (addons-573653)       <model type='virtio'/>
	I0510 17:52:20.693593  396583 main.go:141] libmachine: (addons-573653)     </interface>
	I0510 17:52:20.693602  396583 main.go:141] libmachine: (addons-573653)     <serial type='pty'>
	I0510 17:52:20.693628  396583 main.go:141] libmachine: (addons-573653)       <target port='0'/>
	I0510 17:52:20.693651  396583 main.go:141] libmachine: (addons-573653)     </serial>
	I0510 17:52:20.693660  396583 main.go:141] libmachine: (addons-573653)     <console type='pty'>
	I0510 17:52:20.693681  396583 main.go:141] libmachine: (addons-573653)       <target type='serial' port='0'/>
	I0510 17:52:20.693688  396583 main.go:141] libmachine: (addons-573653)     </console>
	I0510 17:52:20.693695  396583 main.go:141] libmachine: (addons-573653)     <rng model='virtio'>
	I0510 17:52:20.693711  396583 main.go:141] libmachine: (addons-573653)       <backend model='random'>/dev/random</backend>
	I0510 17:52:20.693718  396583 main.go:141] libmachine: (addons-573653)     </rng>
	I0510 17:52:20.693730  396583 main.go:141] libmachine: (addons-573653)     
	I0510 17:52:20.693749  396583 main.go:141] libmachine: (addons-573653)     
	I0510 17:52:20.693759  396583 main.go:141] libmachine: (addons-573653)   </devices>
	I0510 17:52:20.693763  396583 main.go:141] libmachine: (addons-573653) </domain>
	I0510 17:52:20.693770  396583 main.go:141] libmachine: (addons-573653) 
	I0510 17:52:20.698490  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined MAC address 52:54:00:c6:12:08 in network default
	I0510 17:52:20.699175  396583 main.go:141] libmachine: (addons-573653) starting domain...
	I0510 17:52:20.699199  396583 main.go:141] libmachine: (addons-573653) ensuring networks are active...
	I0510 17:52:20.699211  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:52:20.699901  396583 main.go:141] libmachine: (addons-573653) Ensuring network default is active
	I0510 17:52:20.700212  396583 main.go:141] libmachine: (addons-573653) Ensuring network mk-addons-573653 is active
	I0510 17:52:20.700670  396583 main.go:141] libmachine: (addons-573653) getting domain XML...
	I0510 17:52:20.701380  396583 main.go:141] libmachine: (addons-573653) creating domain...
	I0510 17:52:21.932441  396583 main.go:141] libmachine: (addons-573653) waiting for IP...
	I0510 17:52:21.933286  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:52:21.933740  396583 main.go:141] libmachine: (addons-573653) DBG | unable to find current IP address of domain addons-573653 in network mk-addons-573653
	I0510 17:52:21.933793  396583 main.go:141] libmachine: (addons-573653) DBG | I0510 17:52:21.933742  396605 retry.go:31] will retry after 254.884499ms: waiting for domain to come up
	I0510 17:52:22.190715  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:52:22.191362  396583 main.go:141] libmachine: (addons-573653) DBG | unable to find current IP address of domain addons-573653 in network mk-addons-573653
	I0510 17:52:22.191397  396583 main.go:141] libmachine: (addons-573653) DBG | I0510 17:52:22.191332  396605 retry.go:31] will retry after 359.74122ms: waiting for domain to come up
	I0510 17:52:22.553170  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:52:22.553642  396583 main.go:141] libmachine: (addons-573653) DBG | unable to find current IP address of domain addons-573653 in network mk-addons-573653
	I0510 17:52:22.553699  396583 main.go:141] libmachine: (addons-573653) DBG | I0510 17:52:22.553631  396605 retry.go:31] will retry after 297.33897ms: waiting for domain to come up
	I0510 17:52:22.852408  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:52:22.852952  396583 main.go:141] libmachine: (addons-573653) DBG | unable to find current IP address of domain addons-573653 in network mk-addons-573653
	I0510 17:52:22.852982  396583 main.go:141] libmachine: (addons-573653) DBG | I0510 17:52:22.852888  396605 retry.go:31] will retry after 605.917917ms: waiting for domain to come up
	I0510 17:52:23.460710  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:52:23.461029  396583 main.go:141] libmachine: (addons-573653) DBG | unable to find current IP address of domain addons-573653 in network mk-addons-573653
	I0510 17:52:23.461054  396583 main.go:141] libmachine: (addons-573653) DBG | I0510 17:52:23.460988  396605 retry.go:31] will retry after 725.293768ms: waiting for domain to come up
	I0510 17:52:24.188122  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:52:24.188683  396583 main.go:141] libmachine: (addons-573653) DBG | unable to find current IP address of domain addons-573653 in network mk-addons-573653
	I0510 17:52:24.188701  396583 main.go:141] libmachine: (addons-573653) DBG | I0510 17:52:24.188635  396605 retry.go:31] will retry after 939.614432ms: waiting for domain to come up
	I0510 17:52:25.129592  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:52:25.129946  396583 main.go:141] libmachine: (addons-573653) DBG | unable to find current IP address of domain addons-573653 in network mk-addons-573653
	I0510 17:52:25.129986  396583 main.go:141] libmachine: (addons-573653) DBG | I0510 17:52:25.129924  396605 retry.go:31] will retry after 1.169695483s: waiting for domain to come up
	I0510 17:52:26.301473  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:52:26.302000  396583 main.go:141] libmachine: (addons-573653) DBG | unable to find current IP address of domain addons-573653 in network mk-addons-573653
	I0510 17:52:26.302034  396583 main.go:141] libmachine: (addons-573653) DBG | I0510 17:52:26.301936  396605 retry.go:31] will retry after 954.857915ms: waiting for domain to come up
	I0510 17:52:27.258200  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:52:27.258538  396583 main.go:141] libmachine: (addons-573653) DBG | unable to find current IP address of domain addons-573653 in network mk-addons-573653
	I0510 17:52:27.258569  396583 main.go:141] libmachine: (addons-573653) DBG | I0510 17:52:27.258526  396605 retry.go:31] will retry after 1.581036069s: waiting for domain to come up
	I0510 17:52:28.842289  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:52:28.842657  396583 main.go:141] libmachine: (addons-573653) DBG | unable to find current IP address of domain addons-573653 in network mk-addons-573653
	I0510 17:52:28.842714  396583 main.go:141] libmachine: (addons-573653) DBG | I0510 17:52:28.842667  396605 retry.go:31] will retry after 1.736851642s: waiting for domain to come up
	I0510 17:52:30.581354  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:52:30.581858  396583 main.go:141] libmachine: (addons-573653) DBG | unable to find current IP address of domain addons-573653 in network mk-addons-573653
	I0510 17:52:30.581921  396583 main.go:141] libmachine: (addons-573653) DBG | I0510 17:52:30.581843  396605 retry.go:31] will retry after 2.389844403s: waiting for domain to come up
	I0510 17:52:32.973490  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:52:32.973954  396583 main.go:141] libmachine: (addons-573653) DBG | unable to find current IP address of domain addons-573653 in network mk-addons-573653
	I0510 17:52:32.973982  396583 main.go:141] libmachine: (addons-573653) DBG | I0510 17:52:32.973924  396605 retry.go:31] will retry after 3.331473606s: waiting for domain to come up
	I0510 17:52:36.308117  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:52:36.308492  396583 main.go:141] libmachine: (addons-573653) DBG | unable to find current IP address of domain addons-573653 in network mk-addons-573653
	I0510 17:52:36.308556  396583 main.go:141] libmachine: (addons-573653) DBG | I0510 17:52:36.308469  396605 retry.go:31] will retry after 2.854726416s: waiting for domain to come up
	I0510 17:52:39.164383  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:52:39.164814  396583 main.go:141] libmachine: (addons-573653) DBG | unable to find current IP address of domain addons-573653 in network mk-addons-573653
	I0510 17:52:39.164842  396583 main.go:141] libmachine: (addons-573653) DBG | I0510 17:52:39.164764  396605 retry.go:31] will retry after 5.421264588s: waiting for domain to come up
	I0510 17:52:44.591355  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:52:44.591893  396583 main.go:141] libmachine: (addons-573653) found domain IP: 192.168.39.219
	I0510 17:52:44.591923  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has current primary IP address 192.168.39.219 and MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:52:44.591930  396583 main.go:141] libmachine: (addons-573653) reserving static IP address...
	I0510 17:52:44.592358  396583 main.go:141] libmachine: (addons-573653) DBG | unable to find host DHCP lease matching {name: "addons-573653", mac: "52:54:00:68:f2:75", ip: "192.168.39.219"} in network mk-addons-573653
	I0510 17:52:44.677846  396583 main.go:141] libmachine: (addons-573653) DBG | Getting to WaitForSSH function...
	I0510 17:52:44.677877  396583 main.go:141] libmachine: (addons-573653) reserved static IP address 192.168.39.219 for domain addons-573653
	I0510 17:52:44.677896  396583 main.go:141] libmachine: (addons-573653) waiting for SSH...
	I0510 17:52:44.680986  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:52:44.681515  396583 main.go:141] libmachine: (addons-573653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:f2:75", ip: ""} in network mk-addons-573653: {Iface:virbr1 ExpiryTime:2025-05-10 18:52:35 +0000 UTC Type:0 Mac:52:54:00:68:f2:75 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:minikube Clientid:01:52:54:00:68:f2:75}
	I0510 17:52:44.681560  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined IP address 192.168.39.219 and MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:52:44.681704  396583 main.go:141] libmachine: (addons-573653) DBG | Using SSH client type: external
	I0510 17:52:44.681733  396583 main.go:141] libmachine: (addons-573653) DBG | Using SSH private key: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/addons-573653/id_rsa (-rw-------)
	I0510 17:52:44.681783  396583 main.go:141] libmachine: (addons-573653) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.219 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20720-388787/.minikube/machines/addons-573653/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0510 17:52:44.681797  396583 main.go:141] libmachine: (addons-573653) DBG | About to run SSH command:
	I0510 17:52:44.681811  396583 main.go:141] libmachine: (addons-573653) DBG | exit 0
	I0510 17:52:44.811895  396583 main.go:141] libmachine: (addons-573653) DBG | SSH cmd err, output: <nil>: 
	I0510 17:52:44.812131  396583 main.go:141] libmachine: (addons-573653) KVM machine creation complete
	I0510 17:52:44.812531  396583 main.go:141] libmachine: (addons-573653) Calling .GetConfigRaw
	I0510 17:52:44.813112  396583 main.go:141] libmachine: (addons-573653) Calling .DriverName
	I0510 17:52:44.813346  396583 main.go:141] libmachine: (addons-573653) Calling .DriverName
	I0510 17:52:44.813536  396583 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0510 17:52:44.813555  396583 main.go:141] libmachine: (addons-573653) Calling .GetState
	I0510 17:52:44.814799  396583 main.go:141] libmachine: Detecting operating system of created instance...
	I0510 17:52:44.814816  396583 main.go:141] libmachine: Waiting for SSH to be available...
	I0510 17:52:44.814824  396583 main.go:141] libmachine: Getting to WaitForSSH function...
	I0510 17:52:44.814833  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHHostname
	I0510 17:52:44.817254  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:52:44.817668  396583 main.go:141] libmachine: (addons-573653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:f2:75", ip: ""} in network mk-addons-573653: {Iface:virbr1 ExpiryTime:2025-05-10 18:52:35 +0000 UTC Type:0 Mac:52:54:00:68:f2:75 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:addons-573653 Clientid:01:52:54:00:68:f2:75}
	I0510 17:52:44.817696  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined IP address 192.168.39.219 and MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:52:44.817813  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHPort
	I0510 17:52:44.818018  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHKeyPath
	I0510 17:52:44.818169  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHKeyPath
	I0510 17:52:44.818284  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHUsername
	I0510 17:52:44.818454  396583 main.go:141] libmachine: Using SSH client type: native
	I0510 17:52:44.818672  396583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.39.219 22 <nil> <nil>}
	I0510 17:52:44.818685  396583 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0510 17:52:44.934836  396583 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0510 17:52:44.934864  396583 main.go:141] libmachine: Detecting the provisioner...
	I0510 17:52:44.934872  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHHostname
	I0510 17:52:44.938052  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:52:44.938388  396583 main.go:141] libmachine: (addons-573653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:f2:75", ip: ""} in network mk-addons-573653: {Iface:virbr1 ExpiryTime:2025-05-10 18:52:35 +0000 UTC Type:0 Mac:52:54:00:68:f2:75 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:addons-573653 Clientid:01:52:54:00:68:f2:75}
	I0510 17:52:44.938417  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined IP address 192.168.39.219 and MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:52:44.938644  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHPort
	I0510 17:52:44.938857  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHKeyPath
	I0510 17:52:44.939066  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHKeyPath
	I0510 17:52:44.939222  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHUsername
	I0510 17:52:44.939423  396583 main.go:141] libmachine: Using SSH client type: native
	I0510 17:52:44.939709  396583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.39.219 22 <nil> <nil>}
	I0510 17:52:44.939725  396583 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0510 17:52:45.061087  396583 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2024.11.2-dirty
	ID=buildroot
	VERSION_ID=2024.11.2
	PRETTY_NAME="Buildroot 2024.11.2"
	
	I0510 17:52:45.061185  396583 main.go:141] libmachine: found compatible host: buildroot
	I0510 17:52:45.061195  396583 main.go:141] libmachine: Provisioning with buildroot...
	I0510 17:52:45.061203  396583 main.go:141] libmachine: (addons-573653) Calling .GetMachineName
	I0510 17:52:45.061492  396583 buildroot.go:166] provisioning hostname "addons-573653"
	I0510 17:52:45.061520  396583 main.go:141] libmachine: (addons-573653) Calling .GetMachineName
	I0510 17:52:45.061720  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHHostname
	I0510 17:52:45.064695  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:52:45.065141  396583 main.go:141] libmachine: (addons-573653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:f2:75", ip: ""} in network mk-addons-573653: {Iface:virbr1 ExpiryTime:2025-05-10 18:52:35 +0000 UTC Type:0 Mac:52:54:00:68:f2:75 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:addons-573653 Clientid:01:52:54:00:68:f2:75}
	I0510 17:52:45.065170  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined IP address 192.168.39.219 and MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:52:45.065322  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHPort
	I0510 17:52:45.065541  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHKeyPath
	I0510 17:52:45.065754  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHKeyPath
	I0510 17:52:45.065983  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHUsername
	I0510 17:52:45.066132  396583 main.go:141] libmachine: Using SSH client type: native
	I0510 17:52:45.066375  396583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.39.219 22 <nil> <nil>}
	I0510 17:52:45.066397  396583 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-573653 && echo "addons-573653" | sudo tee /etc/hostname
	I0510 17:52:45.201901  396583 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-573653
	
	I0510 17:52:45.201944  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHHostname
	I0510 17:52:45.205627  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:52:45.206061  396583 main.go:141] libmachine: (addons-573653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:f2:75", ip: ""} in network mk-addons-573653: {Iface:virbr1 ExpiryTime:2025-05-10 18:52:35 +0000 UTC Type:0 Mac:52:54:00:68:f2:75 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:addons-573653 Clientid:01:52:54:00:68:f2:75}
	I0510 17:52:45.206096  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined IP address 192.168.39.219 and MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:52:45.206245  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHPort
	I0510 17:52:45.206479  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHKeyPath
	I0510 17:52:45.206659  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHKeyPath
	I0510 17:52:45.206876  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHUsername
	I0510 17:52:45.207083  396583 main.go:141] libmachine: Using SSH client type: native
	I0510 17:52:45.207332  396583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.39.219 22 <nil> <nil>}
	I0510 17:52:45.207349  396583 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-573653' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-573653/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-573653' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0510 17:52:45.335062  396583 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0510 17:52:45.335094  396583 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20720-388787/.minikube CaCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20720-388787/.minikube}
	I0510 17:52:45.335115  396583 buildroot.go:174] setting up certificates
	I0510 17:52:45.335127  396583 provision.go:84] configureAuth start
	I0510 17:52:45.335136  396583 main.go:141] libmachine: (addons-573653) Calling .GetMachineName
	I0510 17:52:45.335465  396583 main.go:141] libmachine: (addons-573653) Calling .GetIP
	I0510 17:52:45.338405  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:52:45.338699  396583 main.go:141] libmachine: (addons-573653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:f2:75", ip: ""} in network mk-addons-573653: {Iface:virbr1 ExpiryTime:2025-05-10 18:52:35 +0000 UTC Type:0 Mac:52:54:00:68:f2:75 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:addons-573653 Clientid:01:52:54:00:68:f2:75}
	I0510 17:52:45.338727  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined IP address 192.168.39.219 and MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:52:45.338922  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHHostname
	I0510 17:52:45.340982  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:52:45.341326  396583 main.go:141] libmachine: (addons-573653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:f2:75", ip: ""} in network mk-addons-573653: {Iface:virbr1 ExpiryTime:2025-05-10 18:52:35 +0000 UTC Type:0 Mac:52:54:00:68:f2:75 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:addons-573653 Clientid:01:52:54:00:68:f2:75}
	I0510 17:52:45.341357  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined IP address 192.168.39.219 and MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:52:45.341507  396583 provision.go:143] copyHostCerts
	I0510 17:52:45.341625  396583 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem (1123 bytes)
	I0510 17:52:45.341825  396583 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem (1675 bytes)
	I0510 17:52:45.341938  396583 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem (1078 bytes)
	I0510 17:52:45.342027  396583 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem org=jenkins.addons-573653 san=[127.0.0.1 192.168.39.219 addons-573653 localhost minikube]
	I0510 17:52:45.652466  396583 provision.go:177] copyRemoteCerts
	I0510 17:52:45.652558  396583 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0510 17:52:45.652603  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHHostname
	I0510 17:52:45.655510  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:52:45.655815  396583 main.go:141] libmachine: (addons-573653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:f2:75", ip: ""} in network mk-addons-573653: {Iface:virbr1 ExpiryTime:2025-05-10 18:52:35 +0000 UTC Type:0 Mac:52:54:00:68:f2:75 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:addons-573653 Clientid:01:52:54:00:68:f2:75}
	I0510 17:52:45.655837  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined IP address 192.168.39.219 and MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:52:45.656146  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHPort
	I0510 17:52:45.656371  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHKeyPath
	I0510 17:52:45.656508  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHUsername
	I0510 17:52:45.656710  396583 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/addons-573653/id_rsa Username:docker}
	I0510 17:52:45.748214  396583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0510 17:52:45.778810  396583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0510 17:52:45.809258  396583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0510 17:52:45.838496  396583 provision.go:87] duration metric: took 503.355391ms to configureAuth
	I0510 17:52:45.838530  396583 buildroot.go:189] setting minikube options for container-runtime
	I0510 17:52:45.838722  396583 config.go:182] Loaded profile config "addons-573653": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 17:52:45.838813  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHHostname
	I0510 17:52:45.841767  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:52:45.842131  396583 main.go:141] libmachine: (addons-573653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:f2:75", ip: ""} in network mk-addons-573653: {Iface:virbr1 ExpiryTime:2025-05-10 18:52:35 +0000 UTC Type:0 Mac:52:54:00:68:f2:75 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:addons-573653 Clientid:01:52:54:00:68:f2:75}
	I0510 17:52:45.842170  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined IP address 192.168.39.219 and MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:52:45.842409  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHPort
	I0510 17:52:45.842613  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHKeyPath
	I0510 17:52:45.842752  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHKeyPath
	I0510 17:52:45.842867  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHUsername
	I0510 17:52:45.843018  396583 main.go:141] libmachine: Using SSH client type: native
	I0510 17:52:45.843252  396583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.39.219 22 <nil> <nil>}
	I0510 17:52:45.843277  396583 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0510 17:52:46.096984  396583 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0510 17:52:46.097012  396583 main.go:141] libmachine: Checking connection to Docker...
	I0510 17:52:46.097025  396583 main.go:141] libmachine: (addons-573653) Calling .GetURL
	I0510 17:52:46.098502  396583 main.go:141] libmachine: (addons-573653) DBG | using libvirt version 6000000
	I0510 17:52:46.100897  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:52:46.101316  396583 main.go:141] libmachine: (addons-573653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:f2:75", ip: ""} in network mk-addons-573653: {Iface:virbr1 ExpiryTime:2025-05-10 18:52:35 +0000 UTC Type:0 Mac:52:54:00:68:f2:75 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:addons-573653 Clientid:01:52:54:00:68:f2:75}
	I0510 17:52:46.101348  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined IP address 192.168.39.219 and MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:52:46.101563  396583 main.go:141] libmachine: Docker is up and running!
	I0510 17:52:46.101584  396583 main.go:141] libmachine: Reticulating splines...
	I0510 17:52:46.101591  396583 client.go:171] duration metric: took 26.370043582s to LocalClient.Create
	I0510 17:52:46.101614  396583 start.go:167] duration metric: took 26.370103578s to libmachine.API.Create "addons-573653"
	I0510 17:52:46.101628  396583 start.go:293] postStartSetup for "addons-573653" (driver="kvm2")
	I0510 17:52:46.101643  396583 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0510 17:52:46.101661  396583 main.go:141] libmachine: (addons-573653) Calling .DriverName
	I0510 17:52:46.101969  396583 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0510 17:52:46.102010  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHHostname
	I0510 17:52:46.104226  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:52:46.104627  396583 main.go:141] libmachine: (addons-573653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:f2:75", ip: ""} in network mk-addons-573653: {Iface:virbr1 ExpiryTime:2025-05-10 18:52:35 +0000 UTC Type:0 Mac:52:54:00:68:f2:75 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:addons-573653 Clientid:01:52:54:00:68:f2:75}
	I0510 17:52:46.104662  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined IP address 192.168.39.219 and MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:52:46.104854  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHPort
	I0510 17:52:46.105102  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHKeyPath
	I0510 17:52:46.105269  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHUsername
	I0510 17:52:46.105417  396583 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/addons-573653/id_rsa Username:docker}
	I0510 17:52:46.195717  396583 ssh_runner.go:195] Run: cat /etc/os-release
	I0510 17:52:46.200562  396583 info.go:137] Remote host: Buildroot 2024.11.2
	I0510 17:52:46.200601  396583 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-388787/.minikube/addons for local assets ...
	I0510 17:52:46.200711  396583 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-388787/.minikube/files for local assets ...
	I0510 17:52:46.200749  396583 start.go:296] duration metric: took 99.109077ms for postStartSetup
	I0510 17:52:46.200800  396583 main.go:141] libmachine: (addons-573653) Calling .GetConfigRaw
	I0510 17:52:46.201454  396583 main.go:141] libmachine: (addons-573653) Calling .GetIP
	I0510 17:52:46.204172  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:52:46.204592  396583 main.go:141] libmachine: (addons-573653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:f2:75", ip: ""} in network mk-addons-573653: {Iface:virbr1 ExpiryTime:2025-05-10 18:52:35 +0000 UTC Type:0 Mac:52:54:00:68:f2:75 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:addons-573653 Clientid:01:52:54:00:68:f2:75}
	I0510 17:52:46.204626  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined IP address 192.168.39.219 and MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:52:46.204838  396583 profile.go:143] Saving config to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/config.json ...
	I0510 17:52:46.205071  396583 start.go:128] duration metric: took 26.492652942s to createHost
	I0510 17:52:46.205103  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHHostname
	I0510 17:52:46.207332  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:52:46.207686  396583 main.go:141] libmachine: (addons-573653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:f2:75", ip: ""} in network mk-addons-573653: {Iface:virbr1 ExpiryTime:2025-05-10 18:52:35 +0000 UTC Type:0 Mac:52:54:00:68:f2:75 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:addons-573653 Clientid:01:52:54:00:68:f2:75}
	I0510 17:52:46.207714  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined IP address 192.168.39.219 and MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:52:46.207857  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHPort
	I0510 17:52:46.208078  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHKeyPath
	I0510 17:52:46.208233  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHKeyPath
	I0510 17:52:46.208349  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHUsername
	I0510 17:52:46.208490  396583 main.go:141] libmachine: Using SSH client type: native
	I0510 17:52:46.208706  396583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.39.219 22 <nil> <nil>}
	I0510 17:52:46.208719  396583 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0510 17:52:46.324726  396583 main.go:141] libmachine: SSH cmd err, output: <nil>: 1746899566.301120623
	
	I0510 17:52:46.324756  396583 fix.go:216] guest clock: 1746899566.301120623
	I0510 17:52:46.324768  396583 fix.go:229] Guest: 2025-05-10 17:52:46.301120623 +0000 UTC Remote: 2025-05-10 17:52:46.205089197 +0000 UTC m=+26.601798804 (delta=96.031426ms)
	I0510 17:52:46.324825  396583 fix.go:200] guest clock delta is within tolerance: 96.031426ms
	I0510 17:52:46.324834  396583 start.go:83] releasing machines lock for "addons-573653", held for 26.612523471s
	I0510 17:52:46.324872  396583 main.go:141] libmachine: (addons-573653) Calling .DriverName
	I0510 17:52:46.325195  396583 main.go:141] libmachine: (addons-573653) Calling .GetIP
	I0510 17:52:46.329230  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:52:46.329685  396583 main.go:141] libmachine: (addons-573653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:f2:75", ip: ""} in network mk-addons-573653: {Iface:virbr1 ExpiryTime:2025-05-10 18:52:35 +0000 UTC Type:0 Mac:52:54:00:68:f2:75 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:addons-573653 Clientid:01:52:54:00:68:f2:75}
	I0510 17:52:46.329718  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined IP address 192.168.39.219 and MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:52:46.329973  396583 main.go:141] libmachine: (addons-573653) Calling .DriverName
	I0510 17:52:46.330582  396583 main.go:141] libmachine: (addons-573653) Calling .DriverName
	I0510 17:52:46.330800  396583 main.go:141] libmachine: (addons-573653) Calling .DriverName
	I0510 17:52:46.330912  396583 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0510 17:52:46.330977  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHHostname
	I0510 17:52:46.331092  396583 ssh_runner.go:195] Run: cat /version.json
	I0510 17:52:46.331129  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHHostname
	I0510 17:52:46.333872  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:52:46.334125  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:52:46.334231  396583 main.go:141] libmachine: (addons-573653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:f2:75", ip: ""} in network mk-addons-573653: {Iface:virbr1 ExpiryTime:2025-05-10 18:52:35 +0000 UTC Type:0 Mac:52:54:00:68:f2:75 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:addons-573653 Clientid:01:52:54:00:68:f2:75}
	I0510 17:52:46.334266  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined IP address 192.168.39.219 and MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:52:46.334419  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHPort
	I0510 17:52:46.334511  396583 main.go:141] libmachine: (addons-573653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:f2:75", ip: ""} in network mk-addons-573653: {Iface:virbr1 ExpiryTime:2025-05-10 18:52:35 +0000 UTC Type:0 Mac:52:54:00:68:f2:75 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:addons-573653 Clientid:01:52:54:00:68:f2:75}
	I0510 17:52:46.334532  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined IP address 192.168.39.219 and MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:52:46.334613  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHKeyPath
	I0510 17:52:46.334701  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHPort
	I0510 17:52:46.334780  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHUsername
	I0510 17:52:46.334883  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHKeyPath
	I0510 17:52:46.334894  396583 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/addons-573653/id_rsa Username:docker}
	I0510 17:52:46.335028  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHUsername
	I0510 17:52:46.335167  396583 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/addons-573653/id_rsa Username:docker}
	I0510 17:52:46.444280  396583 ssh_runner.go:195] Run: systemctl --version
	I0510 17:52:46.450943  396583 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0510 17:52:46.619528  396583 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0510 17:52:46.626287  396583 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0510 17:52:46.626377  396583 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0510 17:52:46.648303  396583 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0510 17:52:46.648346  396583 start.go:495] detecting cgroup driver to use...
	I0510 17:52:46.648469  396583 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0510 17:52:46.668751  396583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0510 17:52:46.686210  396583 docker.go:225] disabling cri-docker service (if available) ...
	I0510 17:52:46.686296  396583 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0510 17:52:46.703316  396583 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0510 17:52:46.719927  396583 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0510 17:52:46.863859  396583 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0510 17:52:47.008014  396583 docker.go:241] disabling docker service ...
	I0510 17:52:47.008119  396583 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0510 17:52:47.025163  396583 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0510 17:52:47.041107  396583 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0510 17:52:47.225110  396583 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0510 17:52:47.369156  396583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0510 17:52:47.385819  396583 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0510 17:52:47.409204  396583 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0510 17:52:47.409281  396583 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 17:52:47.421801  396583 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0510 17:52:47.421900  396583 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 17:52:47.434305  396583 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 17:52:47.446434  396583 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 17:52:47.458521  396583 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0510 17:52:47.471376  396583 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 17:52:47.483289  396583 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 17:52:47.506409  396583 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 17:52:47.519095  396583 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0510 17:52:47.530050  396583 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0510 17:52:47.530140  396583 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0510 17:52:47.546101  396583 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0510 17:52:47.558270  396583 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 17:52:47.697199  396583 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0510 17:52:47.804867  396583 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0510 17:52:47.804969  396583 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0510 17:52:47.810246  396583 start.go:563] Will wait 60s for crictl version
	I0510 17:52:47.810344  396583 ssh_runner.go:195] Run: which crictl
	I0510 17:52:47.814685  396583 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0510 17:52:47.860960  396583 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0510 17:52:47.861085  396583 ssh_runner.go:195] Run: crio --version
	I0510 17:52:47.891226  396583 ssh_runner.go:195] Run: crio --version
	I0510 17:52:47.987500  396583 out.go:177] * Preparing Kubernetes v1.33.0 on CRI-O 1.29.1 ...
	I0510 17:52:48.042978  396583 main.go:141] libmachine: (addons-573653) Calling .GetIP
	I0510 17:52:48.045928  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:52:48.046325  396583 main.go:141] libmachine: (addons-573653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:f2:75", ip: ""} in network mk-addons-573653: {Iface:virbr1 ExpiryTime:2025-05-10 18:52:35 +0000 UTC Type:0 Mac:52:54:00:68:f2:75 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:addons-573653 Clientid:01:52:54:00:68:f2:75}
	I0510 17:52:48.046355  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined IP address 192.168.39.219 and MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:52:48.046603  396583 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0510 17:52:48.051287  396583 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0510 17:52:48.066632  396583 kubeadm.go:875] updating cluster {Name:addons-573653 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:addons-573653 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.219 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0510 17:52:48.066778  396583 preload.go:131] Checking if preload exists for k8s version v1.33.0 and runtime crio
	I0510 17:52:48.066830  396583 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 17:52:48.105011  396583 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.33.0". assuming images are not preloaded.
	I0510 17:52:48.105096  396583 ssh_runner.go:195] Run: which lz4
	I0510 17:52:48.109393  396583 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0510 17:52:48.114027  396583 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0510 17:52:48.114061  396583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (413217622 bytes)
	I0510 17:52:49.701397  396583 crio.go:462] duration metric: took 1.59203551s to copy over tarball
	I0510 17:52:49.701495  396583 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0510 17:52:51.649455  396583 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.947917627s)
	I0510 17:52:51.649515  396583 crio.go:469] duration metric: took 1.94807704s to extract the tarball
	I0510 17:52:51.649529  396583 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0510 17:52:51.690431  396583 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 17:52:51.736556  396583 crio.go:514] all images are preloaded for cri-o runtime.
	I0510 17:52:51.736587  396583 cache_images.go:84] Images are preloaded, skipping loading
	I0510 17:52:51.736596  396583 kubeadm.go:926] updating node { 192.168.39.219 8443 v1.33.0 crio true true} ...
	I0510 17:52:51.736729  396583 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.33.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-573653 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.219
	
	[Install]
	 config:
	{KubernetesVersion:v1.33.0 ClusterName:addons-573653 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0510 17:52:51.736820  396583 ssh_runner.go:195] Run: crio config
	I0510 17:52:51.782775  396583 cni.go:84] Creating CNI manager for ""
	I0510 17:52:51.782798  396583 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0510 17:52:51.782808  396583 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0510 17:52:51.782836  396583 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.219 APIServerPort:8443 KubernetesVersion:v1.33.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-573653 NodeName:addons-573653 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.219"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.219 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0510 17:52:51.783040  396583 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.219
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-573653"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.219"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.219"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.33.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0510 17:52:51.783120  396583 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.33.0
	I0510 17:52:51.795016  396583 binaries.go:44] Found k8s binaries, skipping transfer
	I0510 17:52:51.795096  396583 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0510 17:52:51.806521  396583 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0510 17:52:51.832247  396583 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0510 17:52:51.852167  396583 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I0510 17:52:51.872316  396583 ssh_runner.go:195] Run: grep 192.168.39.219	control-plane.minikube.internal$ /etc/hosts
	I0510 17:52:51.876849  396583 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.219	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0510 17:52:51.891411  396583 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 17:52:52.022782  396583 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0510 17:52:52.056321  396583 certs.go:68] Setting up /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653 for IP: 192.168.39.219
	I0510 17:52:52.056355  396583 certs.go:194] generating shared ca certs ...
	I0510 17:52:52.056381  396583 certs.go:226] acquiring lock for ca certs: {Name:mk8db74782205da4ac57ef815dd495cda255251a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:52:52.056591  396583 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20720-388787/.minikube/ca.key
	I0510 17:52:52.626697  396583 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20720-388787/.minikube/ca.crt ...
	I0510 17:52:52.626737  396583 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-388787/.minikube/ca.crt: {Name:mk91d3dfe7526661a46521a25f417f9eb4864367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:52:52.626946  396583 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20720-388787/.minikube/ca.key ...
	I0510 17:52:52.626965  396583 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-388787/.minikube/ca.key: {Name:mk863fbbe52656e40d070d8b9e5a0b43cd11b55a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:52:52.627079  396583 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.key
	I0510 17:52:52.767756  396583 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.crt ...
	I0510 17:52:52.767790  396583 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.crt: {Name:mk879748f5e402abf655754f0fd48c6aed8e088f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:52:52.767970  396583 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.key ...
	I0510 17:52:52.767984  396583 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.key: {Name:mkc6113e7a85a11fb44b0d1408222a3cdee538a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:52:52.768084  396583 certs.go:256] generating profile certs ...
	I0510 17:52:52.768163  396583 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/client.key
	I0510 17:52:52.768179  396583 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/client.crt with IP's: []
	I0510 17:52:52.880137  396583 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/client.crt ...
	I0510 17:52:52.880172  396583 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/client.crt: {Name:mkc45c4aebf1d11f3047bf0077abea3509e7132b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:52:52.880366  396583 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/client.key ...
	I0510 17:52:52.880392  396583 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/client.key: {Name:mkcbb8cd09a4f8ce08965024b8431737029040d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:52:52.880495  396583 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/apiserver.key.fed00a33
	I0510 17:52:52.880524  396583 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/apiserver.crt.fed00a33 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.219]
	I0510 17:52:52.952917  396583 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/apiserver.crt.fed00a33 ...
	I0510 17:52:52.952951  396583 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/apiserver.crt.fed00a33: {Name:mk49296d6054aeb04d2eb89fd63dd265b90efac6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:52:52.953139  396583 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/apiserver.key.fed00a33 ...
	I0510 17:52:52.953162  396583 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/apiserver.key.fed00a33: {Name:mkc865790c9255822521bf07a787788d8269ef89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:52:52.953273  396583 certs.go:381] copying /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/apiserver.crt.fed00a33 -> /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/apiserver.crt
	I0510 17:52:52.953372  396583 certs.go:385] copying /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/apiserver.key.fed00a33 -> /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/apiserver.key
	I0510 17:52:52.953446  396583 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/proxy-client.key
	I0510 17:52:52.953470  396583 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/proxy-client.crt with IP's: []
	I0510 17:52:53.019403  396583 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/proxy-client.crt ...
	I0510 17:52:53.019438  396583 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/proxy-client.crt: {Name:mk5ed015a864ee8680a0297380919b5378da63d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:52:53.019644  396583 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/proxy-client.key ...
	I0510 17:52:53.019664  396583 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/proxy-client.key: {Name:mk2de77b2e07d110dfea7df3149f134a2d551f51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:52:53.019894  396583 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem (1679 bytes)
	I0510 17:52:53.019934  396583 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem (1078 bytes)
	I0510 17:52:53.019968  396583 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem (1123 bytes)
	I0510 17:52:53.020002  396583 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem (1675 bytes)
	I0510 17:52:53.020595  396583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0510 17:52:53.054525  396583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0510 17:52:53.089987  396583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0510 17:52:53.125328  396583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0510 17:52:53.159013  396583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0510 17:52:53.195617  396583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0510 17:52:53.230290  396583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0510 17:52:53.264290  396583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0510 17:52:53.295959  396583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0510 17:52:53.327043  396583 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0510 17:52:53.349080  396583 ssh_runner.go:195] Run: openssl version
	I0510 17:52:53.356013  396583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0510 17:52:53.370186  396583 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0510 17:52:53.376047  396583 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 10 17:52 /usr/share/ca-certificates/minikubeCA.pem
	I0510 17:52:53.376115  396583 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0510 17:52:53.383790  396583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0510 17:52:53.399229  396583 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0510 17:52:53.404405  396583 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0510 17:52:53.404486  396583 kubeadm.go:392] StartCluster: {Name:addons-573653 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:addons-573653 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.219 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 17:52:53.404608  396583 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0510 17:52:53.404672  396583 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0510 17:52:53.450027  396583 cri.go:89] found id: ""
	I0510 17:52:53.450098  396583 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0510 17:52:53.463364  396583 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0510 17:52:53.475899  396583 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0510 17:52:53.488998  396583 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0510 17:52:53.489023  396583 kubeadm.go:157] found existing configuration files:
	
	I0510 17:52:53.489090  396583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0510 17:52:53.500821  396583 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0510 17:52:53.500897  396583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0510 17:52:53.513638  396583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0510 17:52:53.525960  396583 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0510 17:52:53.526043  396583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0510 17:52:53.539913  396583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0510 17:52:53.553850  396583 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0510 17:52:53.553938  396583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0510 17:52:53.567089  396583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0510 17:52:53.579400  396583 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0510 17:52:53.579474  396583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
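	The block above records minikube's stale-config check: each expected kubeconfig under /etc/kubernetes is grepped for the cluster endpoint https://control-plane.minikube.internal:8443 and removed when the endpoint is not found (here every grep exits with status 2 simply because the files do not exist yet). As a rough, standalone illustration of that check-and-remove pattern, the Go sketch below operates on local files; minikube itself runs the equivalent grep/rm over SSH through its ssh_runner, so this is a simplification, not the project's actual code.

	// stale_kubeconfig_check.go - a minimal local sketch of the cleanup
	// recorded above (not minikube's implementation, which runs these
	// checks over SSH via ssh_runner).
	package main

	import (
		"log"
		"os"
		"strings"
	)

	const endpoint = "https://control-plane.minikube.internal:8443"

	func main() {
		for _, f := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			data, err := os.ReadFile(f)
			if err != nil {
				// The "No such file or directory" case in the log:
				// nothing to clean, kubeadm init will create the file.
				log.Printf("%s: %v (nothing to clean)", f, err)
				continue
			}
			if !strings.Contains(string(data), endpoint) {
				// Stale config pointing elsewhere: remove it so kubeadm
				// regenerates it with the current endpoint.
				log.Printf("removing stale %s", f)
				if err := os.Remove(f); err != nil {
					log.Printf("remove %s: %v", f, err)
				}
			}
		}
	}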
	I0510 17:52:53.591492  396583 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0510 17:52:53.646189  396583 kubeadm.go:310] [init] Using Kubernetes version: v1.33.0
	I0510 17:52:53.646283  396583 kubeadm.go:310] [preflight] Running pre-flight checks
	I0510 17:52:53.766517  396583 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0510 17:52:53.766680  396583 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0510 17:52:53.766775  396583 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0510 17:52:53.778691  396583 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0510 17:52:53.847203  396583 out.go:235]   - Generating certificates and keys ...
	I0510 17:52:53.847372  396583 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0510 17:52:53.847461  396583 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0510 17:52:54.205077  396583 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0510 17:52:54.621731  396583 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0510 17:52:54.912488  396583 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0510 17:52:54.930494  396583 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0510 17:52:55.521044  396583 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0510 17:52:55.521211  396583 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-573653 localhost] and IPs [192.168.39.219 127.0.0.1 ::1]
	I0510 17:52:55.974772  396583 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0510 17:52:55.974945  396583 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-573653 localhost] and IPs [192.168.39.219 127.0.0.1 ::1]
	I0510 17:52:56.496520  396583 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0510 17:52:56.908901  396583 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0510 17:52:57.125010  396583 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0510 17:52:57.125109  396583 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0510 17:52:57.210689  396583 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0510 17:52:57.318308  396583 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0510 17:52:57.445533  396583 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0510 17:52:57.861132  396583 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0510 17:52:58.170197  396583 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0510 17:52:58.170830  396583 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0510 17:52:58.173362  396583 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0510 17:52:58.182228  396583 out.go:235]   - Booting up control plane ...
	I0510 17:52:58.182357  396583 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0510 17:52:58.182454  396583 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0510 17:52:58.182533  396583 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0510 17:52:58.222577  396583 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0510 17:52:58.231889  396583 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0510 17:52:58.231959  396583 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0510 17:52:58.417418  396583 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0510 17:52:58.418403  396583 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0510 17:52:59.420050  396583 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001910848s
	I0510 17:52:59.422541  396583 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0510 17:52:59.422688  396583 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.39.219:8443/livez
	I0510 17:52:59.422820  396583 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0510 17:52:59.422962  396583 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0510 17:53:00.960596  396583 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.537979932s
	I0510 17:53:02.405218  396583 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.983816356s
	I0510 17:53:04.423571  396583 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 5.002623629s
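	The [kubelet-check] and [control-plane-check] entries above show kubeadm polling health endpoints until each component answers: the kubelet on http://127.0.0.1:10248/healthz, kube-controller-manager on :10257, kube-scheduler on :10259, and kube-apiserver on the node IP at :8443, each with a 4m0s budget. The sketch below is only a hand-rolled approximation of that polling loop (kubeadm's own implementation differs); TLS verification is skipped because the control-plane components serve self-signed certificates during bootstrap.

	// healthz_wait.go - an approximate re-creation of the polling that the
	// [control-plane-check] lines above record; not kubeadm's own code.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitHealthy(url string, budget time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// Control-plane components use self-signed certs at this stage.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(budget)
		for time.Now().Before(deadline) {
			if resp, err := client.Get(url); err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s not healthy within %s", url, budget)
	}

	func main() {
		for _, url := range []string{
			"http://127.0.0.1:10248/healthz",    // kubelet
			"https://127.0.0.1:10257/healthz",   // kube-controller-manager
			"https://127.0.0.1:10259/livez",     // kube-scheduler
			"https://192.168.39.219:8443/livez", // kube-apiserver
		} {
			if err := waitHealthy(url, 4*time.Minute); err != nil {
				fmt.Println(err)
				continue
			}
			fmt.Println(url, "is healthy")
		}
	}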
	I0510 17:53:04.442676  396583 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0510 17:53:04.475002  396583 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0510 17:53:04.510412  396583 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0510 17:53:04.510678  396583 kubeadm.go:310] [mark-control-plane] Marking the node addons-573653 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0510 17:53:04.526786  396583 kubeadm.go:310] [bootstrap-token] Using token: iolqu4.2602tpnxk4hb9ea1
	I0510 17:53:04.528366  396583 out.go:235]   - Configuring RBAC rules ...
	I0510 17:53:04.528506  396583 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0510 17:53:04.537468  396583 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0510 17:53:04.549093  396583 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0510 17:53:04.553950  396583 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0510 17:53:04.558197  396583 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0510 17:53:04.566146  396583 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0510 17:53:04.830721  396583 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0510 17:53:05.265317  396583 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0510 17:53:05.830723  396583 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0510 17:53:05.831692  396583 kubeadm.go:310] 
	I0510 17:53:05.831826  396583 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0510 17:53:05.831851  396583 kubeadm.go:310] 
	I0510 17:53:05.831997  396583 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0510 17:53:05.832020  396583 kubeadm.go:310] 
	I0510 17:53:05.832068  396583 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0510 17:53:05.832153  396583 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0510 17:53:05.832237  396583 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0510 17:53:05.832246  396583 kubeadm.go:310] 
	I0510 17:53:05.832323  396583 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0510 17:53:05.832334  396583 kubeadm.go:310] 
	I0510 17:53:05.832416  396583 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0510 17:53:05.832430  396583 kubeadm.go:310] 
	I0510 17:53:05.832525  396583 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0510 17:53:05.832657  396583 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0510 17:53:05.832763  396583 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0510 17:53:05.832775  396583 kubeadm.go:310] 
	I0510 17:53:05.832897  396583 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0510 17:53:05.833017  396583 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0510 17:53:05.833027  396583 kubeadm.go:310] 
	I0510 17:53:05.833147  396583 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token iolqu4.2602tpnxk4hb9ea1 \
	I0510 17:53:05.833301  396583 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:36d5d1ffe285d4a22c72a5a826b0bfd96aa5c48f98bbffd2d05282b2517c8034 \
	I0510 17:53:05.833347  396583 kubeadm.go:310] 	--control-plane 
	I0510 17:53:05.833355  396583 kubeadm.go:310] 
	I0510 17:53:05.833484  396583 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0510 17:53:05.833504  396583 kubeadm.go:310] 
	I0510 17:53:05.833615  396583 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token iolqu4.2602tpnxk4hb9ea1 \
	I0510 17:53:05.833754  396583 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:36d5d1ffe285d4a22c72a5a826b0bfd96aa5c48f98bbffd2d05282b2517c8034 
	I0510 17:53:05.835012  396583 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0510 17:53:05.835042  396583 cni.go:84] Creating CNI manager for ""
	I0510 17:53:05.835053  396583 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0510 17:53:05.837008  396583 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0510 17:53:05.838442  396583 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0510 17:53:05.854219  396583 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
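	For the kvm2 driver with the crio runtime, minikube recommends the built-in bridge CNI and copies a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist. The log does not include the file's contents, so the snippet below only writes a typical bridge-plus-portmap conflist as a guess at its shape; the subnet and plugin options are illustrative assumptions, not values taken from this run.

	// write_bridge_conflist.go - writes an illustrative bridge CNI config
	// similar in spirit to the 1-k8s.conflist scp'd above. The JSON body
	// is an assumption; the actual 496-byte file is not shown in the log.
	package main

	import (
		"log"
		"os"
	)

	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
	    },
	    {"type": "portmap", "capabilities": {"portMappings": true}}
	  ]
	}`

	func main() {
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			log.Fatal(err)
		}
	}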
	I0510 17:53:05.881589  396583 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0510 17:53:05.881749  396583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0510 17:53:05.881808  396583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-573653 minikube.k8s.io/updated_at=2025_05_10T17_53_05_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=e96c83983357cd8557f3cdfe077a25cc73d485a4 minikube.k8s.io/name=addons-573653 minikube.k8s.io/primary=true
	I0510 17:53:05.929497  396583 ops.go:34] apiserver oom_adj: -16
	I0510 17:53:06.058444  396583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0510 17:53:06.558923  396583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0510 17:53:07.059457  396583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0510 17:53:07.559393  396583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0510 17:53:08.059413  396583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0510 17:53:08.558707  396583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0510 17:53:09.059396  396583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0510 17:53:09.558618  396583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0510 17:53:10.058745  396583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0510 17:53:10.161846  396583 kubeadm.go:1105] duration metric: took 4.280188071s to wait for elevateKubeSystemPrivileges
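	After binding kube-system's default service account to cluster-admin (the minikube-rbac clusterrolebinding above), minikube simply retries "kubectl get sa default" roughly twice a second until the API server serves the account, which took about 4.3s here. A minimal stand-in for that wait loop, reusing the binary and kubeconfig paths from the log, might look like this:

	// wait_default_sa.go - sketch of the retry loop recorded above: run
	// `kubectl get sa default` until it succeeds, i.e. until the default
	// service account exists and the RBAC binding can take effect.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		kubectl := "/var/lib/minikube/binaries/v1.33.0/kubectl"
		args := []string{"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig"}

		start := time.Now()
		for {
			if err := exec.Command(kubectl, args...).Run(); err == nil {
				fmt.Printf("default service account ready after %s\n", time.Since(start))
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
	}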
	I0510 17:53:10.161895  396583 kubeadm.go:394] duration metric: took 16.757417461s to StartCluster
	I0510 17:53:10.161928  396583 settings.go:142] acquiring lock: {Name:mk4ab6a112c947bfdedd8044017a7c560266fb5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:53:10.162100  396583 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20720-388787/kubeconfig
	I0510 17:53:10.162690  396583 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-388787/kubeconfig: {Name:mk5ad7285fe4c17b2779ea6d5a539f101fe94797 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:53:10.163096  396583 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0510 17:53:10.163086  396583 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.219 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0510 17:53:10.163129  396583 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0510 17:53:10.163342  396583 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-573653"
	I0510 17:53:10.163357  396583 addons.go:69] Setting yakd=true in profile "addons-573653"
	I0510 17:53:10.163359  396583 config.go:182] Loaded profile config "addons-573653": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 17:53:10.163387  396583 addons.go:238] Setting addon yakd=true in "addons-573653"
	I0510 17:53:10.163416  396583 addons.go:69] Setting metrics-server=true in profile "addons-573653"
	I0510 17:53:10.163408  396583 addons.go:69] Setting inspektor-gadget=true in profile "addons-573653"
	I0510 17:53:10.163431  396583 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-573653"
	I0510 17:53:10.163444  396583 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-573653"
	I0510 17:53:10.163394  396583 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-573653"
	I0510 17:53:10.163451  396583 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-573653"
	I0510 17:53:10.163466  396583 host.go:66] Checking if "addons-573653" exists ...
	I0510 17:53:10.163465  396583 addons.go:238] Setting addon inspektor-gadget=true in "addons-573653"
	I0510 17:53:10.163493  396583 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-573653"
	I0510 17:53:10.163524  396583 host.go:66] Checking if "addons-573653" exists ...
	I0510 17:53:10.163537  396583 host.go:66] Checking if "addons-573653" exists ...
	I0510 17:53:10.163529  396583 addons.go:69] Setting registry=true in profile "addons-573653"
	I0510 17:53:10.163529  396583 addons.go:69] Setting default-storageclass=true in profile "addons-573653"
	I0510 17:53:10.163638  396583 addons.go:238] Setting addon registry=true in "addons-573653"
	I0510 17:53:10.163657  396583 addons.go:69] Setting cloud-spanner=true in profile "addons-573653"
	I0510 17:53:10.163672  396583 addons.go:238] Setting addon cloud-spanner=true in "addons-573653"
	I0510 17:53:10.163710  396583 host.go:66] Checking if "addons-573653" exists ...
	I0510 17:53:10.163722  396583 host.go:66] Checking if "addons-573653" exists ...
	I0510 17:53:10.163971  396583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 17:53:10.163994  396583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:53:10.163995  396583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 17:53:10.164034  396583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:53:10.164098  396583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 17:53:10.164116  396583 addons.go:69] Setting gcp-auth=true in profile "addons-573653"
	I0510 17:53:10.164130  396583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:53:10.164136  396583 mustload.go:65] Loading cluster: addons-573653
	I0510 17:53:10.164162  396583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 17:53:10.164217  396583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:53:10.164295  396583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 17:53:10.164324  396583 config.go:182] Loaded profile config "addons-573653": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 17:53:10.164413  396583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:53:10.164503  396583 addons.go:69] Setting volumesnapshots=true in profile "addons-573653"
	I0510 17:53:10.164526  396583 addons.go:238] Setting addon volumesnapshots=true in "addons-573653"
	I0510 17:53:10.164556  396583 host.go:66] Checking if "addons-573653" exists ...
	I0510 17:53:10.164784  396583 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-573653"
	I0510 17:53:10.163419  396583 addons.go:69] Setting volcano=true in profile "addons-573653"
	I0510 17:53:10.164828  396583 addons.go:238] Setting addon volcano=true in "addons-573653"
	I0510 17:53:10.164846  396583 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-573653"
	I0510 17:53:10.164869  396583 host.go:66] Checking if "addons-573653" exists ...
	I0510 17:53:10.164887  396583 host.go:66] Checking if "addons-573653" exists ...
	I0510 17:53:10.164911  396583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 17:53:10.164964  396583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:53:10.165029  396583 addons.go:69] Setting ingress=true in profile "addons-573653"
	I0510 17:53:10.165048  396583 addons.go:238] Setting addon ingress=true in "addons-573653"
	I0510 17:53:10.165064  396583 addons.go:69] Setting ingress-dns=true in profile "addons-573653"
	I0510 17:53:10.165078  396583 addons.go:238] Setting addon ingress-dns=true in "addons-573653"
	I0510 17:53:10.163642  396583 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-573653"
	I0510 17:53:10.163445  396583 host.go:66] Checking if "addons-573653" exists ...
	I0510 17:53:10.163436  396583 addons.go:238] Setting addon metrics-server=true in "addons-573653"
	I0510 17:53:10.163405  396583 addons.go:69] Setting storage-provisioner=true in profile "addons-573653"
	I0510 17:53:10.165243  396583 addons.go:238] Setting addon storage-provisioner=true in "addons-573653"
	I0510 17:53:10.164098  396583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 17:53:10.165400  396583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:53:10.165475  396583 out.go:177] * Verifying Kubernetes components...
	I0510 17:53:10.165721  396583 host.go:66] Checking if "addons-573653" exists ...
	I0510 17:53:10.165775  396583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 17:53:10.165846  396583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 17:53:10.165871  396583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:53:10.166200  396583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 17:53:10.166334  396583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:53:10.166422  396583 host.go:66] Checking if "addons-573653" exists ...
	I0510 17:53:10.166445  396583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 17:53:10.166491  396583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:53:10.166716  396583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:53:10.166799  396583 host.go:66] Checking if "addons-573653" exists ...
	I0510 17:53:10.166843  396583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 17:53:10.166875  396583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:53:10.167069  396583 host.go:66] Checking if "addons-573653" exists ...
	I0510 17:53:10.167182  396583 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 17:53:10.167361  396583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 17:53:10.167395  396583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:53:10.185139  396583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44379
	I0510 17:53:10.185153  396583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41227
	I0510 17:53:10.185689  396583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37133
	I0510 17:53:10.185875  396583 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:53:10.186212  396583 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:53:10.186475  396583 main.go:141] libmachine: Using API Version  1
	I0510 17:53:10.186498  396583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:53:10.186832  396583 main.go:141] libmachine: Using API Version  1
	I0510 17:53:10.186855  396583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:53:10.187086  396583 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:53:10.187156  396583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37543
	I0510 17:53:10.187275  396583 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:53:10.187438  396583 main.go:141] libmachine: (addons-573653) Calling .GetState
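	From here on the log is dominated by libmachine chatter: while the addon goroutines check machine state, minikube repeatedly launches the docker-machine-driver-kvm2 binary as a local RPC plugin ("Plugin server listening at address 127.0.0.1:NNNNN") and then calls driver methods such as .GetVersion, .SetConfigRaw, .GetMachineName and .GetState over that connection. Purely as an illustration of that launch-a-plugin-and-call-it pattern (libmachine's real wire protocol and method set differ), a toy net/rpc version could look like the sketch below.

	// plugin_rpc_toy.go - a toy stand-in for the plugin-server pattern the
	// libmachine lines reflect; the Driver type and methods here are
	// hypothetical, not libmachine's actual API.
	package main

	import (
		"fmt"
		"log"
		"net"
		"net/rpc"
	)

	type Empty struct{}

	type Driver struct{}

	func (d *Driver) GetVersion(_ Empty, reply *int) error {
		*reply = 1
		return nil
	}

	func (d *Driver) GetState(_ Empty, reply *string) error {
		*reply = "Running"
		return nil
	}

	func main() {
		srv := rpc.NewServer()
		if err := srv.Register(&Driver{}); err != nil {
			log.Fatal(err)
		}
		// Listen on an ephemeral localhost port, like the
		// "Plugin server listening at address 127.0.0.1:NNNNN" lines.
		ln, err := net.Listen("tcp", "127.0.0.1:0")
		if err != nil {
			log.Fatal(err)
		}
		go srv.Accept(ln)

		client, err := rpc.Dial("tcp", ln.Addr().String())
		if err != nil {
			log.Fatal(err)
		}
		var version int
		if err := client.Call("Driver.GetVersion", Empty{}, &version); err != nil {
			log.Fatal(err)
		}
		var state string
		if err := client.Call("Driver.GetState", Empty{}, &state); err != nil {
			log.Fatal(err)
		}
		fmt.Println("driver API version:", version, "state:", state)
	}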
	I0510 17:53:10.199839  396583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32855
	I0510 17:53:10.200136  396583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 17:53:10.200175  396583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:53:10.200450  396583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 17:53:10.200488  396583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:53:10.201341  396583 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:53:10.201474  396583 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:53:10.201742  396583 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-573653"
	I0510 17:53:10.201802  396583 host.go:66] Checking if "addons-573653" exists ...
	I0510 17:53:10.201908  396583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 17:53:10.201952  396583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:53:10.202206  396583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 17:53:10.202260  396583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:53:10.202992  396583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 17:53:10.203047  396583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:53:10.203284  396583 main.go:141] libmachine: Using API Version  1
	I0510 17:53:10.203316  396583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:53:10.203498  396583 main.go:141] libmachine: Using API Version  1
	I0510 17:53:10.203520  396583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:53:10.203827  396583 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:53:10.204407  396583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 17:53:10.204440  396583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:53:10.207701  396583 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:53:10.207826  396583 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:53:10.208555  396583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 17:53:10.208605  396583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:53:10.211509  396583 main.go:141] libmachine: Using API Version  1
	I0510 17:53:10.211530  396583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:53:10.211603  396583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43791
	I0510 17:53:10.212018  396583 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:53:10.212689  396583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 17:53:10.212741  396583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:53:10.215223  396583 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:53:10.215906  396583 main.go:141] libmachine: Using API Version  1
	I0510 17:53:10.215936  396583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:53:10.216443  396583 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:53:10.216770  396583 main.go:141] libmachine: (addons-573653) Calling .GetState
	I0510 17:53:10.218954  396583 host.go:66] Checking if "addons-573653" exists ...
	I0510 17:53:10.219426  396583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 17:53:10.219468  396583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:53:10.223028  396583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38201
	I0510 17:53:10.223709  396583 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:53:10.224394  396583 main.go:141] libmachine: Using API Version  1
	I0510 17:53:10.224414  396583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:53:10.225060  396583 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:53:10.226141  396583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 17:53:10.226195  396583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:53:10.231687  396583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45441
	I0510 17:53:10.232391  396583 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:53:10.233126  396583 main.go:141] libmachine: Using API Version  1
	I0510 17:53:10.233149  396583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:53:10.233659  396583 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:53:10.233904  396583 main.go:141] libmachine: (addons-573653) Calling .GetState
	I0510 17:53:10.236825  396583 addons.go:238] Setting addon default-storageclass=true in "addons-573653"
	I0510 17:53:10.236877  396583 host.go:66] Checking if "addons-573653" exists ...
	I0510 17:53:10.237284  396583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 17:53:10.237330  396583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:53:10.240874  396583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32897
	I0510 17:53:10.241586  396583 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:53:10.241746  396583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45705
	I0510 17:53:10.242301  396583 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:53:10.242509  396583 main.go:141] libmachine: Using API Version  1
	I0510 17:53:10.242522  396583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:53:10.243095  396583 main.go:141] libmachine: Using API Version  1
	I0510 17:53:10.243113  396583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:53:10.243565  396583 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:53:10.243967  396583 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:53:10.244567  396583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 17:53:10.244598  396583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:53:10.247922  396583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 17:53:10.247972  396583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:53:10.249212  396583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34171
	I0510 17:53:10.254490  396583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41443
	I0510 17:53:10.255214  396583 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:53:10.255508  396583 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:53:10.256390  396583 main.go:141] libmachine: Using API Version  1
	I0510 17:53:10.256420  396583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:53:10.256932  396583 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:53:10.257608  396583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 17:53:10.257666  396583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:53:10.257960  396583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34487
	I0510 17:53:10.258112  396583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35085
	I0510 17:53:10.258329  396583 main.go:141] libmachine: Using API Version  1
	I0510 17:53:10.258344  396583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:53:10.258490  396583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39363
	I0510 17:53:10.259058  396583 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:53:10.259139  396583 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:53:10.259458  396583 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:53:10.259545  396583 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:53:10.259773  396583 main.go:141] libmachine: Using API Version  1
	I0510 17:53:10.259793  396583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:53:10.259990  396583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35803
	I0510 17:53:10.260122  396583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33057
	I0510 17:53:10.260411  396583 main.go:141] libmachine: Using API Version  1
	I0510 17:53:10.260426  396583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:53:10.261095  396583 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:53:10.261319  396583 main.go:141] libmachine: (addons-573653) Calling .GetState
	I0510 17:53:10.262459  396583 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:53:10.263068  396583 main.go:141] libmachine: Using API Version  1
	I0510 17:53:10.263089  396583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:53:10.263173  396583 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:53:10.263612  396583 main.go:141] libmachine: (addons-573653) Calling .DriverName
	I0510 17:53:10.264133  396583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 17:53:10.264178  396583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:53:10.264381  396583 main.go:141] libmachine: Using API Version  1
	I0510 17:53:10.264396  396583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:53:10.264413  396583 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:53:10.264575  396583 main.go:141] libmachine: (addons-573653) Calling .GetState
	I0510 17:53:10.264960  396583 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:53:10.265524  396583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 17:53:10.265566  396583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:53:10.266226  396583 main.go:141] libmachine: Using API Version  1
	I0510 17:53:10.266251  396583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:53:10.266395  396583 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.33
	I0510 17:53:10.266562  396583 main.go:141] libmachine: (addons-573653) Calling .DriverName
	I0510 17:53:10.266641  396583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36711
	I0510 17:53:10.267436  396583 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:53:10.267545  396583 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:53:10.267838  396583 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0510 17:53:10.267857  396583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0510 17:53:10.267877  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHHostname
	I0510 17:53:10.268152  396583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 17:53:10.268191  396583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:53:10.269131  396583 main.go:141] libmachine: Using API Version  1
	I0510 17:53:10.269150  396583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:53:10.269182  396583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43663
	I0510 17:53:10.269226  396583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42155
	I0510 17:53:10.269498  396583 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:53:10.269668  396583 main.go:141] libmachine: (addons-573653) Calling .GetState
	I0510 17:53:10.269690  396583 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0510 17:53:10.269741  396583 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:53:10.269841  396583 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:53:10.270177  396583 main.go:141] libmachine: Using API Version  1
	I0510 17:53:10.270193  396583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:53:10.270264  396583 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:53:10.270725  396583 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:53:10.271415  396583 main.go:141] libmachine: (addons-573653) Calling .GetState
	I0510 17:53:10.271572  396583 main.go:141] libmachine: Using API Version  1
	I0510 17:53:10.271585  396583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:53:10.272318  396583 out.go:177]   - Using image docker.io/registry:3.0.0
	I0510 17:53:10.272616  396583 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:53:10.273270  396583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 17:53:10.273299  396583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:53:10.273446  396583 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0510 17:53:10.273461  396583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0510 17:53:10.273478  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHHostname
	I0510 17:53:10.274476  396583 main.go:141] libmachine: (addons-573653) Calling .DriverName
	I0510 17:53:10.274995  396583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 17:53:10.275050  396583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:53:10.275845  396583 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0510 17:53:10.276713  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:53:10.277045  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:53:10.277509  396583 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0510 17:53:10.277529  396583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0510 17:53:10.277549  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHHostname
	I0510 17:53:10.277662  396583 main.go:141] libmachine: (addons-573653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:f2:75", ip: ""} in network mk-addons-573653: {Iface:virbr1 ExpiryTime:2025-05-10 18:52:35 +0000 UTC Type:0 Mac:52:54:00:68:f2:75 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:addons-573653 Clientid:01:52:54:00:68:f2:75}
	I0510 17:53:10.277694  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined IP address 192.168.39.219 and MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:53:10.277727  396583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37713
	I0510 17:53:10.278177  396583 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:53:10.278674  396583 main.go:141] libmachine: Using API Version  1
	I0510 17:53:10.278692  396583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:53:10.279133  396583 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:53:10.279330  396583 main.go:141] libmachine: (addons-573653) Calling .DriverName
	I0510 17:53:10.281036  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:53:10.281213  396583 main.go:141] libmachine: (addons-573653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:f2:75", ip: ""} in network mk-addons-573653: {Iface:virbr1 ExpiryTime:2025-05-10 18:52:35 +0000 UTC Type:0 Mac:52:54:00:68:f2:75 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:addons-573653 Clientid:01:52:54:00:68:f2:75}
	I0510 17:53:10.281239  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined IP address 192.168.39.219 and MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:53:10.281500  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHPort
	I0510 17:53:10.281571  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHPort
	I0510 17:53:10.281975  396583 main.go:141] libmachine: (addons-573653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:f2:75", ip: ""} in network mk-addons-573653: {Iface:virbr1 ExpiryTime:2025-05-10 18:52:35 +0000 UTC Type:0 Mac:52:54:00:68:f2:75 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:addons-573653 Clientid:01:52:54:00:68:f2:75}
	I0510 17:53:10.281994  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined IP address 192.168.39.219 and MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:53:10.282025  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHKeyPath
	I0510 17:53:10.282405  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHKeyPath
	I0510 17:53:10.282465  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHPort
	I0510 17:53:10.282953  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHUsername
	I0510 17:53:10.283013  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHUsername
	I0510 17:53:10.283063  396583 main.go:141] libmachine: (addons-573653) Calling .DriverName
	I0510 17:53:10.283109  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHKeyPath
	I0510 17:53:10.283502  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHUsername
	I0510 17:53:10.283560  396583 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/addons-573653/id_rsa Username:docker}
	I0510 17:53:10.284236  396583 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/addons-573653/id_rsa Username:docker}
	I0510 17:53:10.284795  396583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35353
	I0510 17:53:10.285310  396583 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/addons-573653/id_rsa Username:docker}
	I0510 17:53:10.286389  396583 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:53:10.287123  396583 main.go:141] libmachine: Using API Version  1
	I0510 17:53:10.287154  396583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:53:10.287264  396583 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.1
	I0510 17:53:10.287988  396583 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:53:10.288838  396583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 17:53:10.288894  396583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:53:10.289291  396583 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0510 17:53:10.289326  396583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0510 17:53:10.289355  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHHostname
	I0510 17:53:10.296809  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHPort
	I0510 17:53:10.296819  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:53:10.296850  396583 main.go:141] libmachine: (addons-573653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:f2:75", ip: ""} in network mk-addons-573653: {Iface:virbr1 ExpiryTime:2025-05-10 18:52:35 +0000 UTC Type:0 Mac:52:54:00:68:f2:75 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:addons-573653 Clientid:01:52:54:00:68:f2:75}
	I0510 17:53:10.296886  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined IP address 192.168.39.219 and MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:53:10.297117  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHKeyPath
	I0510 17:53:10.297335  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHUsername
	I0510 17:53:10.297499  396583 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/addons-573653/id_rsa Username:docker}
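	Each "installing /etc/kubernetes/addons/*.yaml" step follows the same shape: render the manifest in memory, open an SSH session to the node (the "new ssh client: &{IP:192.168.39.219 Port:22 ...}" lines), and copy the bytes across. The sketch below shows one way to do that push with golang.org/x/crypto/ssh, piping the manifest into sudo tee; it is a simplified stand-in for minikube's ssh_runner/sshutil, not the code these log lines come from, and the manifest body here is a placeholder.

	// push_manifest.go - a simplified stand-in for the scp-over-SSH step
	// recorded above, using golang.org/x/crypto/ssh. Connection details are
	// taken from the log; the manifest content is a placeholder.
	package main

	import (
		"bytes"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		manifest := []byte("# placeholder: rendered addon manifest bytes go here\n")

		key, err := os.ReadFile("/home/jenkins/minikube-integration/20720-388787/.minikube/machines/addons-573653/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
		}
		client, err := ssh.Dial("tcp", "192.168.39.219:22", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()

		sess.Stdin = bytes.NewReader(manifest)
		// Equivalent in effect to the "scp memory --> /etc/kubernetes/addons/..." lines.
		if err := sess.Run("sudo tee /etc/kubernetes/addons/nvidia-device-plugin.yaml >/dev/null"); err != nil {
			log.Fatal(err)
		}
	}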
	I0510 17:53:10.297994  396583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35945
	I0510 17:53:10.298718  396583 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:53:10.299375  396583 main.go:141] libmachine: Using API Version  1
	I0510 17:53:10.299396  396583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:53:10.299479  396583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39883
	I0510 17:53:10.300048  396583 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:53:10.300578  396583 main.go:141] libmachine: Using API Version  1
	I0510 17:53:10.300595  396583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:53:10.301270  396583 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:53:10.301317  396583 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:53:10.301369  396583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37413
	I0510 17:53:10.301751  396583 main.go:141] libmachine: (addons-573653) Calling .GetState
	I0510 17:53:10.301795  396583 main.go:141] libmachine: (addons-573653) Calling .GetState
	I0510 17:53:10.304703  396583 main.go:141] libmachine: (addons-573653) Calling .DriverName
	I0510 17:53:10.304780  396583 main.go:141] libmachine: (addons-573653) Calling .DriverName
	I0510 17:53:10.306343  396583 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:53:10.306830  396583 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0510 17:53:10.307024  396583 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0510 17:53:10.307567  396583 main.go:141] libmachine: Using API Version  1
	I0510 17:53:10.307595  396583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:53:10.308154  396583 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:53:10.308286  396583 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0510 17:53:10.308308  396583 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0510 17:53:10.308331  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHHostname
	I0510 17:53:10.308375  396583 main.go:141] libmachine: (addons-573653) Calling .GetState
	I0510 17:53:10.308407  396583 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0510 17:53:10.308416  396583 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0510 17:53:10.308430  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHHostname
	I0510 17:53:10.308789  396583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38723
	I0510 17:53:10.309368  396583 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:53:10.309839  396583 main.go:141] libmachine: Using API Version  1
	I0510 17:53:10.309859  396583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:53:10.310313  396583 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:53:10.310534  396583 main.go:141] libmachine: (addons-573653) Calling .GetState
	I0510 17:53:10.312931  396583 main.go:141] libmachine: (addons-573653) Calling .DriverName
	I0510 17:53:10.313735  396583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34091
	I0510 17:53:10.314000  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:53:10.314057  396583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46317
	I0510 17:53:10.314317  396583 main.go:141] libmachine: (addons-573653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:f2:75", ip: ""} in network mk-addons-573653: {Iface:virbr1 ExpiryTime:2025-05-10 18:52:35 +0000 UTC Type:0 Mac:52:54:00:68:f2:75 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:addons-573653 Clientid:01:52:54:00:68:f2:75}
	I0510 17:53:10.314343  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined IP address 192.168.39.219 and MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:53:10.314441  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHPort
	I0510 17:53:10.314614  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHKeyPath
	I0510 17:53:10.314710  396583 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:53:10.314790  396583 main.go:141] libmachine: (addons-573653) Calling .DriverName
	I0510 17:53:10.315023  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHUsername
	I0510 17:53:10.315224  396583 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/addons-573653/id_rsa Username:docker}
	I0510 17:53:10.316269  396583 main.go:141] libmachine: Using API Version  1
	I0510 17:53:10.316291  396583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:53:10.315466  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:53:10.315713  396583 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:53:10.316403  396583 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0510 17:53:10.316710  396583 main.go:141] libmachine: (addons-573653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:f2:75", ip: ""} in network mk-addons-573653: {Iface:virbr1 ExpiryTime:2025-05-10 18:52:35 +0000 UTC Type:0 Mac:52:54:00:68:f2:75 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:addons-573653 Clientid:01:52:54:00:68:f2:75}
	I0510 17:53:10.316743  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined IP address 192.168.39.219 and MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:53:10.316954  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHPort
	I0510 17:53:10.317172  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHKeyPath
	I0510 17:53:10.317321  396583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42201
	I0510 17:53:10.317354  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHUsername
	I0510 17:53:10.317811  396583 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0510 17:53:10.317878  396583 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0510 17:53:10.317891  396583 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0510 17:53:10.317914  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHHostname
	I0510 17:53:10.318741  396583 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:53:10.318882  396583 main.go:141] libmachine: Using API Version  1
	I0510 17:53:10.318922  396583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:53:10.319097  396583 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/addons-573653/id_rsa Username:docker}
	I0510 17:53:10.319592  396583 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0510 17:53:10.319611  396583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0510 17:53:10.319628  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHHostname
	I0510 17:53:10.319703  396583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43945
	I0510 17:53:10.319707  396583 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:53:10.320246  396583 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:53:10.320271  396583 main.go:141] libmachine: (addons-573653) Calling .GetState
	I0510 17:53:10.320594  396583 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:53:10.320756  396583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 17:53:10.320796  396583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:53:10.321156  396583 main.go:141] libmachine: Using API Version  1
	I0510 17:53:10.321175  396583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:53:10.321585  396583 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:53:10.321734  396583 main.go:141] libmachine: Using API Version  1
	I0510 17:53:10.321753  396583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:53:10.321814  396583 main.go:141] libmachine: (addons-573653) Calling .GetState
	I0510 17:53:10.322178  396583 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:53:10.322406  396583 main.go:141] libmachine: (addons-573653) Calling .GetState
	I0510 17:53:10.323097  396583 main.go:141] libmachine: (addons-573653) Calling .DriverName
	I0510 17:53:10.323516  396583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45213
	I0510 17:53:10.324207  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:53:10.324284  396583 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:53:10.324790  396583 main.go:141] libmachine: (addons-573653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:f2:75", ip: ""} in network mk-addons-573653: {Iface:virbr1 ExpiryTime:2025-05-10 18:52:35 +0000 UTC Type:0 Mac:52:54:00:68:f2:75 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:addons-573653 Clientid:01:52:54:00:68:f2:75}
	I0510 17:53:10.324817  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined IP address 192.168.39.219 and MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:53:10.324941  396583 main.go:141] libmachine: Using API Version  1
	I0510 17:53:10.324959  396583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:53:10.325282  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHPort
	I0510 17:53:10.325327  396583 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:53:10.325501  396583 main.go:141] libmachine: (addons-573653) Calling .DriverName
	I0510 17:53:10.325519  396583 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0510 17:53:10.325564  396583 main.go:141] libmachine: (addons-573653) Calling .GetState
	I0510 17:53:10.325605  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHKeyPath
	I0510 17:53:10.325891  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHUsername
	I0510 17:53:10.325963  396583 main.go:141] libmachine: (addons-573653) Calling .DriverName
	I0510 17:53:10.326147  396583 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/addons-573653/id_rsa Username:docker}
	I0510 17:53:10.326906  396583 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0510 17:53:10.326928  396583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0510 17:53:10.326956  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHHostname
	I0510 17:53:10.327559  396583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40511
	I0510 17:53:10.327820  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:53:10.327968  396583 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:53:10.328345  396583 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.39.0
	I0510 17:53:10.328443  396583 main.go:141] libmachine: Using API Version  1
	I0510 17:53:10.328459  396583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:53:10.328511  396583 out.go:177]   - Using image docker.io/busybox:stable
	I0510 17:53:10.328554  396583 main.go:141] libmachine: (addons-573653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:f2:75", ip: ""} in network mk-addons-573653: {Iface:virbr1 ExpiryTime:2025-05-10 18:52:35 +0000 UTC Type:0 Mac:52:54:00:68:f2:75 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:addons-573653 Clientid:01:52:54:00:68:f2:75}
	I0510 17:53:10.328576  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined IP address 192.168.39.219 and MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:53:10.328529  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHPort
	I0510 17:53:10.328774  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHKeyPath
	I0510 17:53:10.328871  396583 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:53:10.328928  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHUsername
	I0510 17:53:10.329037  396583 main.go:141] libmachine: (addons-573653) Calling .GetState
	I0510 17:53:10.329088  396583 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/addons-573653/id_rsa Username:docker}
	I0510 17:53:10.330292  396583 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0510 17:53:10.330312  396583 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0510 17:53:10.330333  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHHostname
	I0510 17:53:10.330911  396583 main.go:141] libmachine: (addons-573653) Calling .DriverName
	I0510 17:53:10.331169  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:53:10.331436  396583 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0510 17:53:10.331454  396583 main.go:141] libmachine: (addons-573653) Calling .DriverName
	I0510 17:53:10.331784  396583 main.go:141] libmachine: (addons-573653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:f2:75", ip: ""} in network mk-addons-573653: {Iface:virbr1 ExpiryTime:2025-05-10 18:52:35 +0000 UTC Type:0 Mac:52:54:00:68:f2:75 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:addons-573653 Clientid:01:52:54:00:68:f2:75}
	I0510 17:53:10.331807  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined IP address 192.168.39.219 and MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:53:10.331999  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHPort
	I0510 17:53:10.332173  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHKeyPath
	I0510 17:53:10.332322  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHUsername
	I0510 17:53:10.332464  396583 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/addons-573653/id_rsa Username:docker}
	I0510 17:53:10.332884  396583 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0510 17:53:10.332953  396583 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0510 17:53:10.332967  396583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0510 17:53:10.332984  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHHostname
	I0510 17:53:10.333053  396583 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0510 17:53:10.333633  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:53:10.333959  396583 main.go:141] libmachine: (addons-573653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:f2:75", ip: ""} in network mk-addons-573653: {Iface:virbr1 ExpiryTime:2025-05-10 18:52:35 +0000 UTC Type:0 Mac:52:54:00:68:f2:75 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:addons-573653 Clientid:01:52:54:00:68:f2:75}
	I0510 17:53:10.333995  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined IP address 192.168.39.219 and MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:53:10.334098  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHPort
	I0510 17:53:10.334281  396583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35421
	I0510 17:53:10.334301  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHKeyPath
	I0510 17:53:10.334454  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHUsername
	I0510 17:53:10.334605  396583 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/addons-573653/id_rsa Username:docker}
	I0510 17:53:10.335548  396583 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:53:10.335941  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:53:10.335954  396583 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0510 17:53:10.336104  396583 main.go:141] libmachine: Using API Version  1
	I0510 17:53:10.336362  396583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:53:10.336985  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHPort
	I0510 17:53:10.337034  396583 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:53:10.336988  396583 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0510 17:53:10.337192  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHKeyPath
	I0510 17:53:10.337252  396583 main.go:141] libmachine: (addons-573653) Calling .GetState
	I0510 17:53:10.336292  396583 main.go:141] libmachine: (addons-573653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:f2:75", ip: ""} in network mk-addons-573653: {Iface:virbr1 ExpiryTime:2025-05-10 18:52:35 +0000 UTC Type:0 Mac:52:54:00:68:f2:75 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:addons-573653 Clientid:01:52:54:00:68:f2:75}
	I0510 17:53:10.337389  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined IP address 192.168.39.219 and MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:53:10.338848  396583 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0510 17:53:10.338908  396583 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0510 17:53:10.338962  396583 main.go:141] libmachine: (addons-573653) Calling .DriverName
	I0510 17:53:10.337405  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHUsername
	I0510 17:53:10.339504  396583 main.go:141] libmachine: Making call to close driver server
	I0510 17:53:10.339521  396583 main.go:141] libmachine: (addons-573653) Calling .Close
	I0510 17:53:10.339831  396583 main.go:141] libmachine: (addons-573653) DBG | Closing plugin on server side
	I0510 17:53:10.339864  396583 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:53:10.339872  396583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:53:10.339880  396583 main.go:141] libmachine: Making call to close driver server
	I0510 17:53:10.339894  396583 main.go:141] libmachine: (addons-573653) Calling .Close
	I0510 17:53:10.340032  396583 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/addons-573653/id_rsa Username:docker}
	I0510 17:53:10.340469  396583 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:53:10.340488  396583 main.go:141] libmachine: Making call to close connection to plugin binary
	W0510 17:53:10.340569  396583 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0510 17:53:10.341569  396583 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0510 17:53:10.341593  396583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0510 17:53:10.341611  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHHostname
	I0510 17:53:10.343751  396583 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0510 17:53:10.344540  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:53:10.344912  396583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33001
	I0510 17:53:10.344978  396583 main.go:141] libmachine: (addons-573653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:f2:75", ip: ""} in network mk-addons-573653: {Iface:virbr1 ExpiryTime:2025-05-10 18:52:35 +0000 UTC Type:0 Mac:52:54:00:68:f2:75 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:addons-573653 Clientid:01:52:54:00:68:f2:75}
	I0510 17:53:10.345001  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined IP address 192.168.39.219 and MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:53:10.345155  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHPort
	I0510 17:53:10.345317  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHKeyPath
	I0510 17:53:10.345522  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHUsername
	I0510 17:53:10.345682  396583 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:53:10.345695  396583 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/addons-573653/id_rsa Username:docker}
	I0510 17:53:10.346091  396583 main.go:141] libmachine: Using API Version  1
	I0510 17:53:10.346119  396583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:53:10.346442  396583 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:53:10.346592  396583 main.go:141] libmachine: (addons-573653) Calling .GetState
	I0510 17:53:10.346942  396583 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0510 17:53:10.348044  396583 main.go:141] libmachine: (addons-573653) Calling .DriverName
	I0510 17:53:10.348261  396583 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0510 17:53:10.348275  396583 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0510 17:53:10.348292  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHHostname
	I0510 17:53:10.349733  396583 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0510 17:53:10.351170  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:53:10.351170  396583 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0510 17:53:10.351740  396583 main.go:141] libmachine: (addons-573653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:f2:75", ip: ""} in network mk-addons-573653: {Iface:virbr1 ExpiryTime:2025-05-10 18:52:35 +0000 UTC Type:0 Mac:52:54:00:68:f2:75 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:addons-573653 Clientid:01:52:54:00:68:f2:75}
	I0510 17:53:10.351773  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined IP address 192.168.39.219 and MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:53:10.352011  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHPort
	I0510 17:53:10.352156  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHKeyPath
	I0510 17:53:10.352315  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHUsername
	I0510 17:53:10.352455  396583 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/addons-573653/id_rsa Username:docker}
	I0510 17:53:10.354018  396583 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0510 17:53:10.355342  396583 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0510 17:53:10.355358  396583 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0510 17:53:10.355377  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHHostname
	I0510 17:53:10.358227  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:53:10.358574  396583 main.go:141] libmachine: (addons-573653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:f2:75", ip: ""} in network mk-addons-573653: {Iface:virbr1 ExpiryTime:2025-05-10 18:52:35 +0000 UTC Type:0 Mac:52:54:00:68:f2:75 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:addons-573653 Clientid:01:52:54:00:68:f2:75}
	I0510 17:53:10.358593  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined IP address 192.168.39.219 and MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:53:10.358782  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHPort
	I0510 17:53:10.358918  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHKeyPath
	I0510 17:53:10.359006  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHUsername
	I0510 17:53:10.359071  396583 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/addons-573653/id_rsa Username:docker}
	W0510 17:53:10.481302  396583 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:32800->192.168.39.219:22: read: connection reset by peer
	I0510 17:53:10.481355  396583 retry.go:31] will retry after 158.042622ms: ssh: handshake failed: read tcp 192.168.39.1:32800->192.168.39.219:22: read: connection reset by peer
	W0510 17:53:10.490055  396583 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:32814->192.168.39.219:22: read: connection reset by peer
	I0510 17:53:10.490089  396583 retry.go:31] will retry after 126.785386ms: ssh: handshake failed: read tcp 192.168.39.1:32814->192.168.39.219:22: read: connection reset by peer
	I0510 17:53:10.733997  396583 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0510 17:53:10.797234  396583 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0510 17:53:10.797261  396583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0510 17:53:10.818534  396583 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0510 17:53:10.818585  396583 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0510 17:53:10.832737  396583 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0510 17:53:10.836611  396583 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0510 17:53:10.836700  396583 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.33.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
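The one-liner above rewrites the coredns ConfigMap so pods in the cluster can resolve host.minikube.internal to the host-side gateway (192.168.39.1 here): it splices a hosts block in front of the forward . /etc/resolv.conf directive and adds a log directive before errors. A rough way to confirm the injected record afterwards (illustrative commands only, not part of the test run; the dns-check pod name is arbitrary, the busybox:stable image is the one already pulled for this profile):

	kubectl --context addons-573653 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	kubectl --context addons-573653 run --rm -it dns-check --image=docker.io/busybox:stable --restart=Never -- nslookup host.minikube.internal
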
	I0510 17:53:10.838822  396583 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0510 17:53:10.838843  396583 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0510 17:53:10.902458  396583 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0510 17:53:10.902485  396583 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0510 17:53:10.904585  396583 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0510 17:53:10.906353  396583 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0510 17:53:10.906378  396583 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0510 17:53:10.908667  396583 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0510 17:53:10.908824  396583 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0510 17:53:10.913092  396583 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0510 17:53:10.965319  396583 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0510 17:53:10.965346  396583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0510 17:53:10.981634  396583 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0510 17:53:11.020537  396583 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0510 17:53:11.020575  396583 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0510 17:53:11.027122  396583 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0510 17:53:11.067321  396583 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0510 17:53:11.067356  396583 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0510 17:53:11.085639  396583 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0510 17:53:11.085672  396583 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0510 17:53:11.144668  396583 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0510 17:53:11.144709  396583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0510 17:53:11.174019  396583 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0510 17:53:11.174063  396583 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0510 17:53:11.205371  396583 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0510 17:53:11.383040  396583 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0510 17:53:11.383078  396583 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0510 17:53:11.444605  396583 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0510 17:53:11.444650  396583 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0510 17:53:11.525025  396583 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0510 17:53:11.525057  396583 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0510 17:53:11.580850  396583 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0510 17:53:11.580885  396583 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0510 17:53:11.720030  396583 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0510 17:53:11.871120  396583 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0510 17:53:11.983325  396583 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0510 17:53:11.983366  396583 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0510 17:53:12.088734  396583 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0510 17:53:12.088765  396583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0510 17:53:12.459077  396583 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0510 17:53:12.459122  396583 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0510 17:53:12.676265  396583 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0510 17:53:12.676300  396583 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0510 17:53:12.873267  396583 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.139223589s)
	I0510 17:53:12.873361  396583 main.go:141] libmachine: Making call to close driver server
	I0510 17:53:12.873376  396583 main.go:141] libmachine: (addons-573653) Calling .Close
	I0510 17:53:12.873723  396583 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:53:12.873748  396583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:53:12.873765  396583 main.go:141] libmachine: Making call to close driver server
	I0510 17:53:12.873775  396583 main.go:141] libmachine: (addons-573653) Calling .Close
	I0510 17:53:12.874188  396583 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:53:12.874212  396583 main.go:141] libmachine: (addons-573653) DBG | Closing plugin on server side
	I0510 17:53:12.874215  396583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:53:12.944127  396583 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0510 17:53:12.944152  396583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0510 17:53:13.014854  396583 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0510 17:53:13.263335  396583 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0510 17:53:13.263371  396583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0510 17:53:13.738197  396583 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0510 17:53:13.770487  396583 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0510 17:53:13.770519  396583 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0510 17:53:14.594221  396583 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0510 17:53:14.594247  396583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0510 17:53:15.058013  396583 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0510 17:53:15.058040  396583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0510 17:53:15.898634  396583 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.065848291s)
	I0510 17:53:15.898705  396583 main.go:141] libmachine: Making call to close driver server
	I0510 17:53:15.898720  396583 main.go:141] libmachine: (addons-573653) Calling .Close
	I0510 17:53:15.898767  396583 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.33.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (5.062027525s)
	I0510 17:53:15.898852  396583 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0510 17:53:15.898806  396583 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.062162997s)
	I0510 17:53:15.899077  396583 main.go:141] libmachine: (addons-573653) DBG | Closing plugin on server side
	I0510 17:53:15.899126  396583 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:53:15.899144  396583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:53:15.899170  396583 main.go:141] libmachine: Making call to close driver server
	I0510 17:53:15.899182  396583 main.go:141] libmachine: (addons-573653) Calling .Close
	I0510 17:53:15.899874  396583 node_ready.go:35] waiting up to 6m0s for node "addons-573653" to be "Ready" ...
	I0510 17:53:15.900993  396583 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:53:15.901008  396583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:53:15.901027  396583 main.go:141] libmachine: (addons-573653) DBG | Closing plugin on server side
	I0510 17:53:15.918362  396583 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0510 17:53:15.918397  396583 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0510 17:53:15.928430  396583 node_ready.go:49] node "addons-573653" is "Ready"
	I0510 17:53:15.928469  396583 node_ready.go:38] duration metric: took 28.555999ms for node "addons-573653" to be "Ready" ...
	I0510 17:53:15.928490  396583 api_server.go:52] waiting for apiserver process to appear ...
	I0510 17:53:15.928546  396583 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 17:53:16.342720  396583 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0510 17:53:16.578630  396583 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-573653" context rescaled to 1 replicas
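minikube performs this rescale of the coredns deployment through its own client; an equivalent manual step, shown here purely for illustration, would be:

	kubectl --context addons-573653 -n kube-system scale deployment coredns --replicas=1
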
	I0510 17:53:17.374726  396583 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0510 17:53:17.374766  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHHostname
	I0510 17:53:17.378212  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:53:17.378620  396583 main.go:141] libmachine: (addons-573653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:f2:75", ip: ""} in network mk-addons-573653: {Iface:virbr1 ExpiryTime:2025-05-10 18:52:35 +0000 UTC Type:0 Mac:52:54:00:68:f2:75 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:addons-573653 Clientid:01:52:54:00:68:f2:75}
	I0510 17:53:17.378659  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined IP address 192.168.39.219 and MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:53:17.378886  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHPort
	I0510 17:53:17.379133  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHKeyPath
	I0510 17:53:17.379415  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHUsername
	I0510 17:53:17.379604  396583 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/addons-573653/id_rsa Username:docker}
	I0510 17:53:17.749997  396583 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0510 17:53:17.906828  396583 addons.go:238] Setting addon gcp-auth=true in "addons-573653"
	I0510 17:53:17.906915  396583 host.go:66] Checking if "addons-573653" exists ...
	I0510 17:53:17.907466  396583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 17:53:17.907515  396583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:53:17.924349  396583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37741
	I0510 17:53:17.924860  396583 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:53:17.925481  396583 main.go:141] libmachine: Using API Version  1
	I0510 17:53:17.925514  396583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:53:17.925911  396583 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:53:17.926460  396583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 17:53:17.926496  396583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:53:17.942459  396583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40565
	I0510 17:53:17.942939  396583 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:53:17.943444  396583 main.go:141] libmachine: Using API Version  1
	I0510 17:53:17.943474  396583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:53:17.943921  396583 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:53:17.944165  396583 main.go:141] libmachine: (addons-573653) Calling .GetState
	I0510 17:53:17.945752  396583 main.go:141] libmachine: (addons-573653) Calling .DriverName
	I0510 17:53:17.946026  396583 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0510 17:53:17.946062  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHHostname
	I0510 17:53:17.949153  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:53:17.949597  396583 main.go:141] libmachine: (addons-573653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:f2:75", ip: ""} in network mk-addons-573653: {Iface:virbr1 ExpiryTime:2025-05-10 18:52:35 +0000 UTC Type:0 Mac:52:54:00:68:f2:75 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:addons-573653 Clientid:01:52:54:00:68:f2:75}
	I0510 17:53:17.949632  396583 main.go:141] libmachine: (addons-573653) DBG | domain addons-573653 has defined IP address 192.168.39.219 and MAC address 52:54:00:68:f2:75 in network mk-addons-573653
	I0510 17:53:17.949862  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHPort
	I0510 17:53:17.950086  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHKeyPath
	I0510 17:53:17.950276  396583 main.go:141] libmachine: (addons-573653) Calling .GetSSHUsername
	I0510 17:53:17.950454  396583 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/addons-573653/id_rsa Username:docker}
	I0510 17:53:17.995791  396583 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.091164929s)
	I0510 17:53:17.995816  396583 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.087110809s)
	I0510 17:53:17.995839  396583 main.go:141] libmachine: Making call to close driver server
	I0510 17:53:17.995848  396583 main.go:141] libmachine: (addons-573653) Calling .Close
	I0510 17:53:17.995858  396583 main.go:141] libmachine: Making call to close driver server
	I0510 17:53:17.995888  396583 main.go:141] libmachine: (addons-573653) Calling .Close
	I0510 17:53:17.995954  396583 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.087082665s)
	I0510 17:53:17.995988  396583 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.08287009s)
	I0510 17:53:17.996043  396583 main.go:141] libmachine: Making call to close driver server
	I0510 17:53:17.996095  396583 main.go:141] libmachine: (addons-573653) Calling .Close
	I0510 17:53:17.996075  396583 main.go:141] libmachine: Making call to close driver server
	I0510 17:53:17.996193  396583 main.go:141] libmachine: (addons-573653) Calling .Close
	I0510 17:53:17.996295  396583 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:53:17.996296  396583 main.go:141] libmachine: (addons-573653) DBG | Closing plugin on server side
	I0510 17:53:17.996308  396583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:53:17.996317  396583 main.go:141] libmachine: Making call to close driver server
	I0510 17:53:17.996324  396583 main.go:141] libmachine: (addons-573653) Calling .Close
	I0510 17:53:17.996345  396583 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:53:17.996353  396583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:53:17.996362  396583 main.go:141] libmachine: Making call to close driver server
	I0510 17:53:17.996370  396583 main.go:141] libmachine: (addons-573653) Calling .Close
	I0510 17:53:17.996534  396583 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:53:17.996554  396583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:53:17.996592  396583 main.go:141] libmachine: (addons-573653) DBG | Closing plugin on server side
	I0510 17:53:17.996618  396583 main.go:141] libmachine: (addons-573653) DBG | Closing plugin on server side
	I0510 17:53:17.996618  396583 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:53:17.996629  396583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:53:17.996647  396583 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:53:17.996655  396583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:53:17.996663  396583 main.go:141] libmachine: Making call to close driver server
	I0510 17:53:17.996670  396583 main.go:141] libmachine: (addons-573653) Calling .Close
	I0510 17:53:17.998194  396583 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:53:17.998201  396583 main.go:141] libmachine: (addons-573653) DBG | Closing plugin on server side
	I0510 17:53:17.998211  396583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:53:17.998217  396583 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:53:17.998229  396583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:53:17.998236  396583 main.go:141] libmachine: Making call to close driver server
	I0510 17:53:17.998244  396583 main.go:141] libmachine: (addons-573653) Calling .Close
	I0510 17:53:17.998551  396583 main.go:141] libmachine: (addons-573653) DBG | Closing plugin on server side
	I0510 17:53:17.998572  396583 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:53:17.998581  396583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:53:18.350535  396583 main.go:141] libmachine: Making call to close driver server
	I0510 17:53:18.350568  396583 main.go:141] libmachine: (addons-573653) Calling .Close
	I0510 17:53:18.350892  396583 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:53:18.350915  396583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:53:19.736359  396583 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.754674747s)
	I0510 17:53:19.736434  396583 main.go:141] libmachine: Making call to close driver server
	I0510 17:53:19.736439  396583 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.709271023s)
	I0510 17:53:19.736484  396583 main.go:141] libmachine: Making call to close driver server
	I0510 17:53:19.736507  396583 main.go:141] libmachine: (addons-573653) Calling .Close
	I0510 17:53:19.736448  396583 main.go:141] libmachine: (addons-573653) Calling .Close
	I0510 17:53:19.736539  396583 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.016482466s)
	I0510 17:53:19.736505  396583 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (8.531096148s)
	I0510 17:53:19.736574  396583 main.go:141] libmachine: Making call to close driver server
	I0510 17:53:19.736585  396583 main.go:141] libmachine: (addons-573653) Calling .Close
	I0510 17:53:19.736592  396583 main.go:141] libmachine: Making call to close driver server
	I0510 17:53:19.736604  396583 main.go:141] libmachine: (addons-573653) Calling .Close
	I0510 17:53:19.736660  396583 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.865506128s)
	I0510 17:53:19.736685  396583 main.go:141] libmachine: Making call to close driver server
	I0510 17:53:19.736697  396583 main.go:141] libmachine: (addons-573653) Calling .Close
	I0510 17:53:19.736703  396583 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.721811776s)
	I0510 17:53:19.736725  396583 main.go:141] libmachine: Making call to close driver server
	I0510 17:53:19.736735  396583 main.go:141] libmachine: (addons-573653) Calling .Close
	I0510 17:53:19.737133  396583 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:53:19.737147  396583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:53:19.737161  396583 main.go:141] libmachine: (addons-573653) DBG | Closing plugin on server side
	I0510 17:53:19.737174  396583 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:53:19.737182  396583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:53:19.737182  396583 main.go:141] libmachine: (addons-573653) DBG | Closing plugin on server side
	I0510 17:53:19.737189  396583 main.go:141] libmachine: Making call to close driver server
	I0510 17:53:19.737197  396583 main.go:141] libmachine: (addons-573653) Calling .Close
	I0510 17:53:19.737208  396583 main.go:141] libmachine: (addons-573653) DBG | Closing plugin on server side
	I0510 17:53:19.737225  396583 main.go:141] libmachine: (addons-573653) DBG | Closing plugin on server side
	I0510 17:53:19.737240  396583 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:53:19.737247  396583 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:53:19.737253  396583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:53:19.737255  396583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:53:19.737264  396583 main.go:141] libmachine: Making call to close driver server
	I0510 17:53:19.737264  396583 main.go:141] libmachine: Making call to close driver server
	I0510 17:53:19.737269  396583 main.go:141] libmachine: (addons-573653) DBG | Closing plugin on server side
	I0510 17:53:19.737272  396583 main.go:141] libmachine: (addons-573653) Calling .Close
	I0510 17:53:19.737275  396583 main.go:141] libmachine: (addons-573653) Calling .Close
	I0510 17:53:19.737294  396583 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:53:19.737301  396583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:53:19.737307  396583 main.go:141] libmachine: Making call to close driver server
	I0510 17:53:19.737256  396583 main.go:141] libmachine: (addons-573653) DBG | Closing plugin on server side
	I0510 17:53:19.737669  396583 main.go:141] libmachine: (addons-573653) DBG | Closing plugin on server side
	I0510 17:53:19.737683  396583 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:53:19.737696  396583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:53:19.737699  396583 main.go:141] libmachine: (addons-573653) DBG | Closing plugin on server side
	I0510 17:53:19.737709  396583 addons.go:479] Verifying addon registry=true in "addons-573653"
	I0510 17:53:19.737722  396583 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:53:19.737729  396583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:53:19.739098  396583 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:53:19.739158  396583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:53:19.739359  396583 main.go:141] libmachine: Making call to close driver server
	I0510 17:53:19.739380  396583 main.go:141] libmachine: (addons-573653) Calling .Close
	I0510 17:53:19.737687  396583 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:53:19.739399  396583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:53:19.739410  396583 main.go:141] libmachine: Making call to close driver server
	I0510 17:53:19.739417  396583 main.go:141] libmachine: (addons-573653) Calling .Close
	I0510 17:53:19.737313  396583 main.go:141] libmachine: (addons-573653) Calling .Close
	I0510 17:53:19.739796  396583 main.go:141] libmachine: (addons-573653) DBG | Closing plugin on server side
	I0510 17:53:19.739820  396583 main.go:141] libmachine: (addons-573653) DBG | Closing plugin on server side
	I0510 17:53:19.739846  396583 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:53:19.739849  396583 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:53:19.739857  396583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:53:19.739860  396583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:53:19.739867  396583 addons.go:479] Verifying addon ingress=true in "addons-573653"
	I0510 17:53:19.739867  396583 addons.go:479] Verifying addon metrics-server=true in "addons-573653"
	I0510 17:53:19.740787  396583 main.go:141] libmachine: (addons-573653) DBG | Closing plugin on server side
	I0510 17:53:19.740801  396583 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:53:19.740812  396583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:53:19.741562  396583 out.go:177] * Verifying ingress addon...
	I0510 17:53:19.741570  396583 out.go:177] * Verifying registry addon...
	I0510 17:53:19.741630  396583 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-573653 service yakd-dashboard -n yakd-dashboard
	
	I0510 17:53:19.743807  396583 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0510 17:53:19.743819  396583 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0510 17:53:19.751378  396583 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0510 17:53:19.751411  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:19.751485  396583 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0510 17:53:19.751497  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:19.767221  396583 main.go:141] libmachine: Making call to close driver server
	I0510 17:53:19.767263  396583 main.go:141] libmachine: (addons-573653) Calling .Close
	I0510 17:53:19.767718  396583 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:53:19.767770  396583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:53:19.767798  396583 main.go:141] libmachine: (addons-573653) DBG | Closing plugin on server side
	I0510 17:53:20.253048  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:20.253122  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:20.811675  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:20.812049  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:21.222126  396583 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.483874153s)
	I0510 17:53:21.222176  396583 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.293599229s)
	W0510 17:53:21.222190  396583 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0510 17:53:21.222218  396583 api_server.go:72] duration metric: took 11.058991159s to wait for apiserver process to appear ...
	I0510 17:53:21.222227  396583 api_server.go:88] waiting for apiserver healthz status ...
	I0510 17:53:21.222222  396583 retry.go:31] will retry after 298.334119ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
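The failure quoted above is an ordering problem rather than a broken manifest: the VolumeSnapshotClass object in csi-hostpath-snapshotclass.yaml is submitted in the same kubectl apply that creates the snapshot.storage.k8s.io CRDs, and the API server has not yet established those CRDs when the class is applied, so no REST mapping exists for kind "VolumeSnapshotClass". minikube simply retries, as the retry.go line above records. A manual sequence that avoids the race would look roughly like this (illustrative only; the file paths are the ones shown in the log):

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
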
	I0510 17:53:21.222251  396583 api_server.go:253] Checking apiserver healthz at https://192.168.39.219:8443/healthz ...
	I0510 17:53:21.233947  396583 api_server.go:279] https://192.168.39.219:8443/healthz returned 200:
	ok
	I0510 17:53:21.250832  396583 api_server.go:141] control plane version: v1.33.0
	I0510 17:53:21.250872  396583 api_server.go:131] duration metric: took 28.63646ms to wait for apiserver health ...
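The healthz probe above against https://192.168.39.219:8443/healthz can be reproduced from the same kubeconfig with an equivalent (illustrative) command; a healthy apiserver answers with the single word "ok":

	kubectl --context addons-573653 get --raw /healthz
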
	I0510 17:53:21.250887  396583 system_pods.go:43] waiting for kube-system pods to appear ...
	I0510 17:53:21.312441  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:21.312493  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:21.312775  396583 system_pods.go:59] 14 kube-system pods found
	I0510 17:53:21.312806  396583 system_pods.go:61] "amd-gpu-device-plugin-6bgxv" [f404a532-f8a5-4910-b4fd-829feef931fb] Running
	I0510 17:53:21.312821  396583 system_pods.go:61] "coredns-674b8bbfcf-ng4h8" [27c21a48-fba1-4ce8-b143-5060ef2e095d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0510 17:53:21.312831  396583 system_pods.go:61] "coredns-674b8bbfcf-zs6db" [01153744-0cd3-45c5-849d-2f8fdae71c22] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0510 17:53:21.312841  396583 system_pods.go:61] "etcd-addons-573653" [7baafe91-a65d-4148-af0d-abcd38849afa] Running
	I0510 17:53:21.312849  396583 system_pods.go:61] "kube-apiserver-addons-573653" [e65696e1-eab7-4f18-a520-76124c94f83b] Running
	I0510 17:53:21.312857  396583 system_pods.go:61] "kube-controller-manager-addons-573653" [1907f98d-c473-4ee7-9073-84ec72630feb] Running
	I0510 17:53:21.312866  396583 system_pods.go:61] "kube-ingress-dns-minikube" [15d89e41-22de-487e-8436-801252358cf7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0510 17:53:21.312875  396583 system_pods.go:61] "kube-proxy-vhxfm" [2e9851f7-f262-4178-a042-dc33b7249403] Running
	I0510 17:53:21.312880  396583 system_pods.go:61] "kube-scheduler-addons-573653" [1ff03158-1e5b-42c4-88fd-ebb7e0119d80] Running
	I0510 17:53:21.312888  396583 system_pods.go:61] "metrics-server-7fbb699795-4svvf" [e6310a8d-fbdc-463e-af85-5ad9c1e1cf86] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0510 17:53:21.312911  396583 system_pods.go:61] "nvidia-device-plugin-daemonset-2qlw4" [3edbc82d-3c0a-4ac4-b43c-ac2363e24f12] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0510 17:53:21.312925  396583 system_pods.go:61] "registry-694bd45846-w5zm2" [2792955b-c0bc-4f02-93dd-2d7bb14fb09b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0510 17:53:21.312934  396583 system_pods.go:61] "registry-proxy-wkzrw" [8d763450-a4fa-4fe8-8481-43644617e2bb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0510 17:53:21.312945  396583 system_pods.go:61] "storage-provisioner" [98a4f21e-497b-4a9a-b4ee-a8c30688ea41] Running
	I0510 17:53:21.312968  396583 system_pods.go:74] duration metric: took 62.062767ms to wait for pod list to return data ...
	I0510 17:53:21.312983  396583 default_sa.go:34] waiting for default service account to be created ...
	I0510 17:53:21.381282  396583 default_sa.go:45] found service account: "default"
	I0510 17:53:21.381311  396583 default_sa.go:55] duration metric: took 68.319576ms for default service account to be created ...
	I0510 17:53:21.381324  396583 system_pods.go:116] waiting for k8s-apps to be running ...
	I0510 17:53:21.446246  396583 system_pods.go:86] 16 kube-system pods found
	I0510 17:53:21.446291  396583 system_pods.go:89] "amd-gpu-device-plugin-6bgxv" [f404a532-f8a5-4910-b4fd-829feef931fb] Running
	I0510 17:53:21.446306  396583 system_pods.go:89] "coredns-674b8bbfcf-ng4h8" [27c21a48-fba1-4ce8-b143-5060ef2e095d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0510 17:53:21.446314  396583 system_pods.go:89] "coredns-674b8bbfcf-zs6db" [01153744-0cd3-45c5-849d-2f8fdae71c22] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0510 17:53:21.446323  396583 system_pods.go:89] "etcd-addons-573653" [7baafe91-a65d-4148-af0d-abcd38849afa] Running
	I0510 17:53:21.446329  396583 system_pods.go:89] "kube-apiserver-addons-573653" [e65696e1-eab7-4f18-a520-76124c94f83b] Running
	I0510 17:53:21.446336  396583 system_pods.go:89] "kube-controller-manager-addons-573653" [1907f98d-c473-4ee7-9073-84ec72630feb] Running
	I0510 17:53:21.446347  396583 system_pods.go:89] "kube-ingress-dns-minikube" [15d89e41-22de-487e-8436-801252358cf7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0510 17:53:21.446357  396583 system_pods.go:89] "kube-proxy-vhxfm" [2e9851f7-f262-4178-a042-dc33b7249403] Running
	I0510 17:53:21.446365  396583 system_pods.go:89] "kube-scheduler-addons-573653" [1ff03158-1e5b-42c4-88fd-ebb7e0119d80] Running
	I0510 17:53:21.446373  396583 system_pods.go:89] "metrics-server-7fbb699795-4svvf" [e6310a8d-fbdc-463e-af85-5ad9c1e1cf86] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0510 17:53:21.446386  396583 system_pods.go:89] "nvidia-device-plugin-daemonset-2qlw4" [3edbc82d-3c0a-4ac4-b43c-ac2363e24f12] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0510 17:53:21.446409  396583 system_pods.go:89] "registry-694bd45846-w5zm2" [2792955b-c0bc-4f02-93dd-2d7bb14fb09b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0510 17:53:21.446423  396583 system_pods.go:89] "registry-proxy-wkzrw" [8d763450-a4fa-4fe8-8481-43644617e2bb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0510 17:53:21.446429  396583 system_pods.go:89] "snapshot-controller-68b874b76f-ckkzp" [3a1f1631-aa82-4bea-aa64-9585bdab8cae] Pending
	I0510 17:53:21.446440  396583 system_pods.go:89] "snapshot-controller-68b874b76f-v7bvq" [6b80d3e6-f564-44e7-9a68-f68a048289f6] Pending
	I0510 17:53:21.446449  396583 system_pods.go:89] "storage-provisioner" [98a4f21e-497b-4a9a-b4ee-a8c30688ea41] Running
	I0510 17:53:21.446459  396583 system_pods.go:126] duration metric: took 65.128111ms to wait for k8s-apps to be running ...
	I0510 17:53:21.446475  396583 system_svc.go:44] waiting for kubelet service to be running ....
	I0510 17:53:21.446543  396583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0510 17:53:21.520974  396583 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0510 17:53:21.764615  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:21.840359  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:22.217595  396583 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.87481216s)
	I0510 17:53:22.217625  396583 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.271566868s)
	I0510 17:53:22.217659  396583 main.go:141] libmachine: Making call to close driver server
	I0510 17:53:22.217678  396583 main.go:141] libmachine: (addons-573653) Calling .Close
	I0510 17:53:22.217677  396583 system_svc.go:56] duration metric: took 771.193315ms WaitForService to wait for kubelet
	I0510 17:53:22.217701  396583 kubeadm.go:578] duration metric: took 12.054473371s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0510 17:53:22.217735  396583 node_conditions.go:102] verifying NodePressure condition ...
	I0510 17:53:22.218045  396583 main.go:141] libmachine: (addons-573653) DBG | Closing plugin on server side
	I0510 17:53:22.218110  396583 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:53:22.218131  396583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:53:22.218153  396583 main.go:141] libmachine: Making call to close driver server
	I0510 17:53:22.218165  396583 main.go:141] libmachine: (addons-573653) Calling .Close
	I0510 17:53:22.218439  396583 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:53:22.218462  396583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:53:22.218474  396583 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-573653"
	I0510 17:53:22.219559  396583 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0510 17:53:22.220429  396583 out.go:177] * Verifying csi-hostpath-driver addon...
	I0510 17:53:22.221863  396583 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0510 17:53:22.222563  396583 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0510 17:53:22.222970  396583 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0510 17:53:22.222988  396583 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0510 17:53:22.254273  396583 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0510 17:53:22.254305  396583 node_conditions.go:123] node cpu capacity is 2
	I0510 17:53:22.254321  396583 node_conditions.go:105] duration metric: took 36.477093ms to run NodePressure ...
	I0510 17:53:22.254336  396583 start.go:241] waiting for startup goroutines ...
	I0510 17:53:22.259501  396583 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0510 17:53:22.259528  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:22.311957  396583 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0510 17:53:22.311990  396583 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0510 17:53:22.331348  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:22.331368  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:22.441246  396583 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0510 17:53:22.441283  396583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0510 17:53:22.533160  396583 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0510 17:53:22.728492  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:22.749463  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:22.750064  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:23.226762  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:23.248542  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:23.248688  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:23.271581  396583 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.750540944s)
	I0510 17:53:23.271659  396583 main.go:141] libmachine: Making call to close driver server
	I0510 17:53:23.271678  396583 main.go:141] libmachine: (addons-573653) Calling .Close
	I0510 17:53:23.272076  396583 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:53:23.272098  396583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:53:23.272110  396583 main.go:141] libmachine: Making call to close driver server
	I0510 17:53:23.272119  396583 main.go:141] libmachine: (addons-573653) Calling .Close
	I0510 17:53:23.272385  396583 main.go:141] libmachine: (addons-573653) DBG | Closing plugin on server side
	I0510 17:53:23.272451  396583 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:53:23.272467  396583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:53:23.739576  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:23.754198  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:23.754211  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:23.933486  396583 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.400271264s)
	I0510 17:53:23.933556  396583 main.go:141] libmachine: Making call to close driver server
	I0510 17:53:23.933570  396583 main.go:141] libmachine: (addons-573653) Calling .Close
	I0510 17:53:23.933957  396583 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:53:23.934013  396583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:53:23.934028  396583 main.go:141] libmachine: Making call to close driver server
	I0510 17:53:23.934037  396583 main.go:141] libmachine: (addons-573653) Calling .Close
	I0510 17:53:23.934290  396583 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:53:23.934338  396583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:53:23.934351  396583 main.go:141] libmachine: (addons-573653) DBG | Closing plugin on server side
	I0510 17:53:23.935357  396583 addons.go:479] Verifying addon gcp-auth=true in "addons-573653"
	I0510 17:53:23.937991  396583 out.go:177] * Verifying gcp-auth addon...
	I0510 17:53:23.940006  396583 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0510 17:53:23.973955  396583 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0510 17:53:23.973982  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:24.227250  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:24.250355  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:24.250450  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:24.443633  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:24.727204  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:24.748334  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:24.748836  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:24.944085  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:25.227013  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:25.247799  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:25.248671  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:25.524132  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:25.726639  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:25.748330  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:25.748485  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:25.945688  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:26.227380  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:26.247170  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:26.247253  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:26.443676  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:26.725958  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:26.747893  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:26.748966  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:26.942934  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:27.226814  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:27.248007  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:27.248457  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:27.443974  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:27.727335  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:27.747379  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:27.748372  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:27.943388  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:28.226724  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:28.248650  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:28.249740  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:28.444339  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:28.726855  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:28.748675  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:28.748822  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:28.943725  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:29.226398  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:29.247340  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:29.247444  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:29.444982  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:29.846897  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:29.847255  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:29.847311  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:29.944145  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:30.226257  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:30.248215  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:30.248218  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:30.443006  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:30.726080  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:30.747460  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:30.747756  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:30.944257  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:31.226484  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:31.247402  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:31.248022  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:31.445299  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:31.727390  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:31.748126  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:31.748148  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:31.943919  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:32.226466  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:32.247738  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:32.247805  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:32.443952  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:32.726203  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:32.747091  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:32.747531  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:32.943860  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:33.754773  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:33.754817  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:33.755050  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:33.755178  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:33.759517  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:33.760251  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:33.760959  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:33.945868  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:34.226993  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:34.249193  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:34.249327  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:34.443496  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:34.726667  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:34.747393  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:34.748439  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:34.944518  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:35.227448  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:35.247324  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:35.249360  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:35.444915  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:35.726111  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:35.747130  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:35.748110  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:35.943451  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:36.229565  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:36.247833  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:36.247839  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:36.443448  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:36.726216  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:36.747062  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:36.747385  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:36.949122  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:37.227596  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:37.247515  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:37.247635  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:37.445188  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:37.727158  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:37.748525  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:37.748562  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:37.944117  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:38.226970  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:38.248364  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:38.248664  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:38.444711  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:38.725824  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:38.748394  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:38.748557  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:38.943716  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:39.225940  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:39.248630  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:39.248734  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:39.459900  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:39.726761  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:39.747865  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:39.748106  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:40.237576  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:40.237830  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:40.334522  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:40.334796  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:40.443867  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:40.726267  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:40.747993  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:40.748118  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:40.943831  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:41.226738  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:41.248279  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:41.248682  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:41.443715  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:41.726170  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:41.748667  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:41.749004  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:41.944252  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:42.229157  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:42.248002  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:42.248343  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:42.444109  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:42.726719  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:42.748162  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:42.748283  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:42.943437  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:43.227174  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:43.248247  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:43.249059  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:43.443256  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:43.728204  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:43.747217  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:43.747651  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:43.996028  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:44.227914  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:44.273913  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:44.274051  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:44.443965  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:44.726788  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:44.748518  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:44.748685  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:44.944222  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:45.226392  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:45.247526  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:45.248571  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:45.443545  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:45.727447  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:45.747493  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:45.747754  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:45.944093  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:46.227259  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:46.247169  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:46.247183  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:46.444347  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:46.726916  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:46.748046  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:46.748312  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:46.943299  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:47.228126  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:47.248481  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:47.249438  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:47.447354  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:47.726638  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:47.748423  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:47.749148  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:47.943183  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:48.227341  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:48.247617  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:48.247721  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:48.444575  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:48.726300  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:48.747763  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:48.747847  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:48.944151  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:49.226480  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:49.259927  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:49.259968  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:49.791277  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:49.791337  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:49.791428  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:49.792832  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:49.943643  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:50.228438  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:50.330062  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:50.330062  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:50.442709  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:50.726879  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:50.748960  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:50.749129  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:50.943206  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:51.226484  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:51.248041  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:51.248058  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:51.444032  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:51.726915  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:51.748007  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:51.748123  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:51.942813  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:52.226308  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:52.247987  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:52.248179  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:52.443504  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:52.725879  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:52.748215  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:52.748256  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:52.943644  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:53.226260  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:53.247870  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:53.248590  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:53.444124  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:53.726812  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:53.748890  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:53.749069  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:53.944528  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:54.226249  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:54.256480  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:54.256609  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:54.444317  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:54.726063  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:54.752907  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:54.752966  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:54.944505  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:55.225774  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:55.251796  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:55.252145  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:55.443817  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:55.726559  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:55.754636  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:55.754718  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:55.946481  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:56.226206  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:56.247654  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:56.248601  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:56.444280  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:56.727708  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:56.834732  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:56.834788  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:56.944346  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:57.226905  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:57.248674  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:57.249161  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:57.448120  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:57.741174  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:57.765903  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:57.766065  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:57.943041  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:58.226652  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:58.248315  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:58.248406  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:58.444036  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:58.727552  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:58.750714  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:58.750726  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:58.943602  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:59.226234  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:59.248437  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:59.248644  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:59.521202  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:53:59.726825  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:53:59.748330  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:53:59.748340  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:53:59.943790  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:00.229128  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:54:00.247767  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:54:00.250007  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:00.444159  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:00.728953  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:54:00.747495  396583 kapi.go:107] duration metric: took 41.003662043s to wait for kubernetes.io/minikube-addons=registry ...
	I0510 17:54:00.747757  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:00.944756  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:01.226003  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:54:01.248406  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:01.443793  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:01.726657  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:54:01.747583  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:01.943568  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:02.226566  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:54:02.247441  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:02.444676  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:02.727955  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:54:02.748474  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:02.944676  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:03.225947  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:54:03.247848  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:03.444365  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:03.727130  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:54:03.748213  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:03.942994  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:04.227386  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:54:04.246956  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:04.445177  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:04.732655  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:54:04.755430  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:04.944341  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:05.226071  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:54:05.247908  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:05.443429  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:05.726150  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:54:05.747128  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:05.942931  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:06.226985  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:54:06.247850  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:06.444087  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:06.727527  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:54:06.747650  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:06.943904  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:07.682679  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:54:07.700098  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:07.700182  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:07.780683  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:54:07.782349  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:07.943506  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:08.232768  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:54:08.257042  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:08.447904  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:08.726733  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:54:08.747385  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:08.944499  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:09.227392  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:54:09.248360  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:09.443242  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:09.726802  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:54:09.747610  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:09.944574  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:10.228800  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:54:10.329191  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:10.448712  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:10.726249  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:54:10.746972  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:10.944376  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:11.225742  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:54:11.247611  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:11.444373  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:11.727311  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:54:11.750443  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:11.947932  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:12.226444  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:54:12.249958  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:12.445033  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:12.726730  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:54:12.747716  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:12.944053  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:13.226879  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:54:13.248246  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:13.445177  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:13.727647  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:54:13.748185  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:13.943769  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:14.227420  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:54:14.247848  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:14.444432  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:14.727960  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:54:14.972854  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:15.189258  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:15.228373  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:54:15.249094  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:15.446303  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:15.727800  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:54:15.747929  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:15.951816  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:16.226276  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:54:16.247672  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:16.448723  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:16.727172  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:54:16.827743  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:16.944424  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:17.227089  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:54:17.246959  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:17.443996  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:17.726608  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:54:17.747345  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:17.943196  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:18.228098  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:54:18.246805  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:18.444293  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:18.727604  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:54:18.747884  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:18.943733  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:19.226476  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:54:19.327814  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:19.447084  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:19.727360  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:54:19.747061  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:19.943398  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:20.227017  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:54:20.248093  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:20.448296  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:20.727186  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:54:20.748019  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:20.943357  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:21.229330  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:54:21.247222  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:21.444194  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:21.729103  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:54:21.747988  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:21.944964  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:22.226625  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:54:22.248073  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:22.443218  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:23.075268  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:54:23.076673  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:23.076718  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:23.227135  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:54:23.247579  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:23.443741  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:23.726986  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:54:23.747470  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:23.944180  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:24.228527  396583 kapi.go:107] duration metric: took 1m2.005953731s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0510 17:54:24.247447  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:24.443191  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:24.749142  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:24.944901  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:25.249359  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:25.445634  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:25.748080  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:25.942857  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:26.248073  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:26.447916  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:26.747263  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:26.943099  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:27.248165  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:27.443005  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:27.747425  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:27.943617  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:28.248588  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:28.443874  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:28.747641  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:28.944020  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:29.247709  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:29.444627  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:29.748607  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:29.950049  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:30.247217  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:30.444105  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:30.748401  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:30.943704  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:31.254393  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:31.445777  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:31.747473  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:31.943645  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:32.248348  396583 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:54:32.449877  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:32.748808  396583 kapi.go:107] duration metric: took 1m13.004998127s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0510 17:54:32.956288  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:33.456086  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:33.943883  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:34.445588  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:34.944017  396583 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:54:35.443833  396583 kapi.go:107] duration metric: took 1m11.503827297s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0510 17:54:35.445551  396583 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-573653 cluster.
	I0510 17:54:35.447087  396583 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0510 17:54:35.448480  396583 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0510 17:54:35.449863  396583 out.go:177] * Enabled addons: amd-gpu-device-plugin, storage-provisioner, cloud-spanner, nvidia-device-plugin, ingress-dns, storage-provisioner-rancher, inspektor-gadget, metrics-server, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0510 17:54:35.450937  396583 addons.go:514] duration metric: took 1m25.287813051s for enable addons: enabled=[amd-gpu-device-plugin storage-provisioner cloud-spanner nvidia-device-plugin ingress-dns storage-provisioner-rancher inspektor-gadget metrics-server yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
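
The gcp-auth messages above note that credential mounting can be skipped for a specific pod by adding a label with the `gcp-auth-skip-secret` key. A minimal sketch of how that opt-out might be applied with kubectl is below; the label key comes from the output above, while the pod name and the "true" value are assumptions, since the message does not state an expected value.

	# Hypothetical example: opt a pod out of GCP credential mounting (pod name and value assumed)
	kubectl --context addons-573653 label pod my-pod gcp-auth-skip-secret=true
	# Or declare the same label in the pod manifest under metadata.labels:
	#   gcp-auth-skip-secret: "true"
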
	I0510 17:54:35.451003  396583 start.go:246] waiting for cluster config update ...
	I0510 17:54:35.451032  396583 start.go:255] writing updated cluster config ...
	I0510 17:54:35.451326  396583 ssh_runner.go:195] Run: rm -f paused
	I0510 17:54:35.457611  396583 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0510 17:54:35.544463  396583 pod_ready.go:83] waiting for pod "coredns-674b8bbfcf-ng4h8" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:54:35.550636  396583 pod_ready.go:94] pod "coredns-674b8bbfcf-ng4h8" is "Ready"
	I0510 17:54:35.550672  396583 pod_ready.go:86] duration metric: took 6.168177ms for pod "coredns-674b8bbfcf-ng4h8" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:54:35.553823  396583 pod_ready.go:83] waiting for pod "etcd-addons-573653" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:54:35.559210  396583 pod_ready.go:94] pod "etcd-addons-573653" is "Ready"
	I0510 17:54:35.559298  396583 pod_ready.go:86] duration metric: took 5.382893ms for pod "etcd-addons-573653" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:54:35.562282  396583 pod_ready.go:83] waiting for pod "kube-apiserver-addons-573653" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:54:35.567585  396583 pod_ready.go:94] pod "kube-apiserver-addons-573653" is "Ready"
	I0510 17:54:35.567609  396583 pod_ready.go:86] duration metric: took 5.30701ms for pod "kube-apiserver-addons-573653" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:54:35.570237  396583 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-573653" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:54:35.862335  396583 pod_ready.go:94] pod "kube-controller-manager-addons-573653" is "Ready"
	I0510 17:54:35.862375  396583 pod_ready.go:86] duration metric: took 292.109957ms for pod "kube-controller-manager-addons-573653" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:54:36.062640  396583 pod_ready.go:83] waiting for pod "kube-proxy-vhxfm" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:54:36.462762  396583 pod_ready.go:94] pod "kube-proxy-vhxfm" is "Ready"
	I0510 17:54:36.462797  396583 pod_ready.go:86] duration metric: took 400.123127ms for pod "kube-proxy-vhxfm" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:54:36.663338  396583 pod_ready.go:83] waiting for pod "kube-scheduler-addons-573653" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:54:37.062631  396583 pod_ready.go:94] pod "kube-scheduler-addons-573653" is "Ready"
	I0510 17:54:37.062666  396583 pod_ready.go:86] duration metric: took 399.301899ms for pod "kube-scheduler-addons-573653" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:54:37.062680  396583 pod_ready.go:40] duration metric: took 1.605022805s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
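
The readiness checks logged above poll the core kube-system pods by their label selectors. Outside the test harness, a roughly equivalent manual check could be run with `kubectl wait`, assuming the same context, namespace, and selectors shown in the log; this is a sketch, not the command the harness itself runs.

	# Sketch: wait for the same kube-system pods the log polls, using the logged selectors
	kubectl --context addons-573653 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=240s
	kubectl --context addons-573653 -n kube-system wait --for=condition=Ready pod -l component=kube-apiserver --timeout=240s
	kubectl --context addons-573653 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-proxy --timeout=240s
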
	I0510 17:54:37.109560  396583 start.go:607] kubectl: 1.33.0, cluster: 1.33.0 (minor skew: 0)
	I0510 17:54:37.111527  396583 out.go:177] * Done! kubectl is now configured to use "addons-573653" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	May 10 17:57:46 addons-573653 crio[855]: time="2025-05-10 17:57:46.509085811Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746899866509058542,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603048,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b5c486e2-9eaf-4030-bd46-a3373ce5bd45 name=/runtime.v1.ImageService/ImageFsInfo
	May 10 17:57:46 addons-573653 crio[855]: time="2025-05-10 17:57:46.509924916Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6092d473-1805-4f5b-9568-1923ed291f29 name=/runtime.v1.RuntimeService/ListContainers
	May 10 17:57:46 addons-573653 crio[855]: time="2025-05-10 17:57:46.510012039Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6092d473-1805-4f5b-9568-1923ed291f29 name=/runtime.v1.RuntimeService/ListContainers
	May 10 17:57:46 addons-573653 crio[855]: time="2025-05-10 17:57:46.510337705Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d6dd654d18521b56b4ec819bcfad77d306e33631a230ff043db77d7ab89b171b,PodSandboxId:a965d7764846cf9d9b5282b769985f2f84a089fc6946e5042e6ed4971e6449f4,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1746899866361472673,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-7d9564db4-95ppt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 21902b6a-8c5a-40cd-a3c2-7f45f3221b6a,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.p
orts: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6edd817eaccc76427e71ebf123ae857d788248a10fb08e9017f51bd68cfd1e2c,PodSandboxId:8414ff9c4576b5a9fec43a6d2a378284e58ae09d5ef30105bbb7e31e72958b5a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:62223d644fa234c3a1cc785ee14242ec47a77364226f1c811d2f669f96dc2ac8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6769dc3a703c719c1d2756bda113659be28ae16cf0da58dd5fd823d6b9a050ea,State:CONTAINER_RUNNING,CreatedAt:1746899726976335202,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 99d0701e-2474-4490-adc0-d9078c08bee4,},Annotations:map[string]string{io.kubernete
s.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e40eb98f106a50906505731c2dc1ee0a0d51904474e8d9d8afbf04efc46687d,PodSandboxId:80c06c24ece23c60c27d41e6427d5e6400bb2c249b794879d854bc3173b15c30,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1746899680639795411,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d50a2768-dbe5-442b-b3
a0-5dc397a99a69,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5b6e50764a1243e201c9fac19ad1c2f704c3c38dba39c48de741cf4fc8a522a,PodSandboxId:c2dc6372e0e83545396dcd501dc476d24f140877180e890fd66bed15c524c3c0,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1746899671325072061,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7c9f76cd49-c2vhb,io.kubernetes.pod.namespace: ingress-nginx,io
.kubernetes.pod.uid: deb79d17-d72b-4a13-9572-6dc927470944,},Annotations:map[string]string{io.kubernetes.container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b7e884c12f55b6ec5beeabbf81e9ff774ee12232ae23d93ed99bf8ea7ab7c910,PodSandboxId:7ea6cd16719c55dec59442df0ccdb745c6209eb2a85ccde8f776aa4a5d4be747,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff
8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1746899643829090920,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-lqpfr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: eb5bf4ab-f305-415b-a0b7-edec4fca3d37,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa320dc25a4b7544181c29eff84a1a0a9e66dd2ba5b7f901bfece778bf55b005,PodSandboxId:a197c8a158f28f3f4a2716b50706e581d7ddc1c199e486d8b8b24098c0891118,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbf
bb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1746899642543388666,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4v7j7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9313dc25-9fa6-444e-ae30-19d0982a0ab3,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f1973290582db2b546cfc5ecf153f012a638436cab98cec29d22c965ebf9c45,PodSandboxId:1054e269b365f57fc94525e37e28a8f5ed2ce87808c3989c44e8d8ec00dd6c54,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube
/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1746899623051560522,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15d89e41-22de-487e-8436-801252358cf7,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cce31af5f8769969248bbeacd59b711b5cfbf85e133668f009ab1629adc0b6e,PodSandboxId:1c48dd822bd5d5b17ff6ce0d8e189d32c8cf
fc2ed6958e4b0eea9d5b7192b446,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1746899597286547513,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98a4f21e-497b-4a9a-b4ee-a8c30688ea41,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90822645b04b1b46a5f0c742bab13f4f88dc45cebf8572d0bf574438d81ec390,PodSandboxId:bac25ba312c37ddcaa5c9bfd69ce4797519b9a39ead6be78
41f3db7cc6f0137d,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1746899597119998368,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-6bgxv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f404a532-f8a5-4910-b4fd-829feef931fb,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:524dab1f7225ab150d072d3ab6f462c33eca016c9fc41df0d6eca1dead64df03,PodSandboxId:2c1fa3d5
2a945da0edb9b4580945cc4e62a693547d7f11c990b2a5fd9abd2bdb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1746899592349635393,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-ng4h8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27c21a48-fba1-4ce8-b143-5060ef2e095d,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:207c61dbf088cda58109b444a955b23d8dab0912dc10afed0c59d1a9d82de721,PodSandboxId:a9dc3dc9623be09aa14aa5ea7f4e1ab6f72c8f52c65c9287fa58a8559799394b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,State:CONTAINER_RUNNING,CreatedAt:1746899591847475181,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vhxfm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e9851f7-f262-4178-a042-dc33b7249403,},Annotations:map[string]string{io.kubernetes.container.hash: 2406bd3f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21ca5d56715c241aa417e8b5cc2214306af49fb616b06ec5efd7b147159fb5ee,PodSandboxId:a14092b7353f4399af6eecac59ff17fbb05953986f012a3142a0ad5a0e5377db,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,State:CONTAINER_RUNNING,CreatedAt:1746899579740455571,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-573653,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af14e4298804f6652203bedef0d5fac1,},Annotations:map[string]string{io.kubernetes.container.hash: fd54b99d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5314135b24b036d09c5a5ad6fe5eae2bc592daa59fec03ed7bb93b54d11f5f8,PodSandboxId:0837b915b9f06f9d73256b1df30b89b63f741a4f0298ce4548986446f90e8bcb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,State:CONTAINER_RUNNING,CreatedAt:1746899579760773905,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-573653,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9dbec509d86258fc7c292029613cd58,},Annotations:map[string]string{io.kubernetes.container.hash: 20846f37,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d9643e5261749222b6ce4a8ea3461d3482d18ab904c90cb3dae2a3a46d20d98,PodSandboxId:c995748ece4d1c57f742fb18736ddb37a81dccf07455326dd88fdc88a6f1ebb2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1746899579821029947,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-573653,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 337a50b11c80dd43979374987a91fb64,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},&Container{Id:a3e9984f17f70d54dc58f16aebd59d5838104af1a74cf4a50d966447508cc65a,PodSandboxId:3369a35b1dc99d0b03daa87fcb2d88e57c57f72950e1aa5a7b6690c87e1500c3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,State:CONTAINER_RUNNING,CreatedAt:1746899579723015146,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-573653,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1cfbe91fbb233869676d7d6c4f43282,},Annotations:map[string]string{io.kubernetes.container.hash: 2e2dc675,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file=
"otel-collector/interceptors.go:74" id=6092d473-1805-4f5b-9568-1923ed291f29 name=/runtime.v1.RuntimeService/ListContainers
	May 10 17:57:46 addons-573653 crio[855]: time="2025-05-10 17:57:46.560527641Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=649c0f8e-4f5c-4961-a8a7-b70af3842eb2 name=/runtime.v1.RuntimeService/Version
	May 10 17:57:46 addons-573653 crio[855]: time="2025-05-10 17:57:46.560615565Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=649c0f8e-4f5c-4961-a8a7-b70af3842eb2 name=/runtime.v1.RuntimeService/Version
	May 10 17:57:46 addons-573653 crio[855]: time="2025-05-10 17:57:46.561953139Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=23cd3a63-b638-44ae-8dc9-14188a8143bd name=/runtime.v1.ImageService/ImageFsInfo
	May 10 17:57:46 addons-573653 crio[855]: time="2025-05-10 17:57:46.563231251Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746899866563196632,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603048,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=23cd3a63-b638-44ae-8dc9-14188a8143bd name=/runtime.v1.ImageService/ImageFsInfo
	May 10 17:57:46 addons-573653 crio[855]: time="2025-05-10 17:57:46.564119889Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bd8df46f-9369-499f-ab02-c53386601467 name=/runtime.v1.RuntimeService/ListContainers
	May 10 17:57:46 addons-573653 crio[855]: time="2025-05-10 17:57:46.564217279Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bd8df46f-9369-499f-ab02-c53386601467 name=/runtime.v1.RuntimeService/ListContainers
	May 10 17:57:46 addons-573653 crio[855]: time="2025-05-10 17:57:46.564548675Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d6dd654d18521b56b4ec819bcfad77d306e33631a230ff043db77d7ab89b171b,PodSandboxId:a965d7764846cf9d9b5282b769985f2f84a089fc6946e5042e6ed4971e6449f4,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1746899866361472673,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-7d9564db4-95ppt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 21902b6a-8c5a-40cd-a3c2-7f45f3221b6a,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.p
orts: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6edd817eaccc76427e71ebf123ae857d788248a10fb08e9017f51bd68cfd1e2c,PodSandboxId:8414ff9c4576b5a9fec43a6d2a378284e58ae09d5ef30105bbb7e31e72958b5a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:62223d644fa234c3a1cc785ee14242ec47a77364226f1c811d2f669f96dc2ac8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6769dc3a703c719c1d2756bda113659be28ae16cf0da58dd5fd823d6b9a050ea,State:CONTAINER_RUNNING,CreatedAt:1746899726976335202,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 99d0701e-2474-4490-adc0-d9078c08bee4,},Annotations:map[string]string{io.kubernete
s.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e40eb98f106a50906505731c2dc1ee0a0d51904474e8d9d8afbf04efc46687d,PodSandboxId:80c06c24ece23c60c27d41e6427d5e6400bb2c249b794879d854bc3173b15c30,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1746899680639795411,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d50a2768-dbe5-442b-b3
a0-5dc397a99a69,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5b6e50764a1243e201c9fac19ad1c2f704c3c38dba39c48de741cf4fc8a522a,PodSandboxId:c2dc6372e0e83545396dcd501dc476d24f140877180e890fd66bed15c524c3c0,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1746899671325072061,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7c9f76cd49-c2vhb,io.kubernetes.pod.namespace: ingress-nginx,io
.kubernetes.pod.uid: deb79d17-d72b-4a13-9572-6dc927470944,},Annotations:map[string]string{io.kubernetes.container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b7e884c12f55b6ec5beeabbf81e9ff774ee12232ae23d93ed99bf8ea7ab7c910,PodSandboxId:7ea6cd16719c55dec59442df0ccdb745c6209eb2a85ccde8f776aa4a5d4be747,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff
8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1746899643829090920,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-lqpfr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: eb5bf4ab-f305-415b-a0b7-edec4fca3d37,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa320dc25a4b7544181c29eff84a1a0a9e66dd2ba5b7f901bfece778bf55b005,PodSandboxId:a197c8a158f28f3f4a2716b50706e581d7ddc1c199e486d8b8b24098c0891118,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbf
bb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1746899642543388666,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4v7j7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9313dc25-9fa6-444e-ae30-19d0982a0ab3,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f1973290582db2b546cfc5ecf153f012a638436cab98cec29d22c965ebf9c45,PodSandboxId:1054e269b365f57fc94525e37e28a8f5ed2ce87808c3989c44e8d8ec00dd6c54,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube
/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1746899623051560522,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15d89e41-22de-487e-8436-801252358cf7,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cce31af5f8769969248bbeacd59b711b5cfbf85e133668f009ab1629adc0b6e,PodSandboxId:1c48dd822bd5d5b17ff6ce0d8e189d32c8cf
fc2ed6958e4b0eea9d5b7192b446,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1746899597286547513,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98a4f21e-497b-4a9a-b4ee-a8c30688ea41,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90822645b04b1b46a5f0c742bab13f4f88dc45cebf8572d0bf574438d81ec390,PodSandboxId:bac25ba312c37ddcaa5c9bfd69ce4797519b9a39ead6be78
41f3db7cc6f0137d,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1746899597119998368,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-6bgxv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f404a532-f8a5-4910-b4fd-829feef931fb,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:524dab1f7225ab150d072d3ab6f462c33eca016c9fc41df0d6eca1dead64df03,PodSandboxId:2c1fa3d5
2a945da0edb9b4580945cc4e62a693547d7f11c990b2a5fd9abd2bdb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1746899592349635393,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-ng4h8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27c21a48-fba1-4ce8-b143-5060ef2e095d,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:207c61dbf088cda58109b444a955b23d8dab0912dc10afed0c59d1a9d82de721,PodSandboxId:a9dc3dc9623be09aa14aa5ea7f4e1ab6f72c8f52c65c9287fa58a8559799394b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,State:CONTAINER_RUNNING,CreatedAt:1746899591847475181,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vhxfm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e9851f7-f262-4178-a042-dc33b7249403,},Annotations:map[string]string{io.kubernetes.container.hash: 2406bd3f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21ca5d56715c241aa417e8b5cc2214306af49fb616b06ec5efd7b147159fb5ee,PodSandboxId:a14092b7353f4399af6eecac59ff17fbb05953986f012a3142a0ad5a0e5377db,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,State:CONTAINER_RUNNING,CreatedAt:1746899579740455571,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-573653,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af14e4298804f6652203bedef0d5fac1,},Annotations:map[string]string{io.kubernetes.container.hash: fd54b99d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5314135b24b036d09c5a5ad6fe5eae2bc592daa59fec03ed7bb93b54d11f5f8,PodSandboxId:0837b915b9f06f9d73256b1df30b89b63f741a4f0298ce4548986446f90e8bcb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,State:CONTAINER_RUNNING,CreatedAt:1746899579760773905,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-573653,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9dbec509d86258fc7c292029613cd58,},Annotations:map[string]string{io.kubernetes.container.hash: 20846f37,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d9643e5261749222b6ce4a8ea3461d3482d18ab904c90cb3dae2a3a46d20d98,PodSandboxId:c995748ece4d1c57f742fb18736ddb37a81dccf07455326dd88fdc88a6f1ebb2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1746899579821029947,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-573653,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 337a50b11c80dd43979374987a91fb64,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},&Container{Id:a3e9984f17f70d54dc58f16aebd59d5838104af1a74cf4a50d966447508cc65a,PodSandboxId:3369a35b1dc99d0b03daa87fcb2d88e57c57f72950e1aa5a7b6690c87e1500c3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,State:CONTAINER_RUNNING,CreatedAt:1746899579723015146,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-573653,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1cfbe91fbb233869676d7d6c4f43282,},Annotations:map[string]string{io.kubernetes.container.hash: 2e2dc675,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file=
"otel-collector/interceptors.go:74" id=bd8df46f-9369-499f-ab02-c53386601467 name=/runtime.v1.RuntimeService/ListContainers
	May 10 17:57:46 addons-573653 crio[855]: time="2025-05-10 17:57:46.603561443Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cda0437e-3cbe-4e8a-ade8-e92d72b5bc2c name=/runtime.v1.RuntimeService/Version
	May 10 17:57:46 addons-573653 crio[855]: time="2025-05-10 17:57:46.603730566Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cda0437e-3cbe-4e8a-ade8-e92d72b5bc2c name=/runtime.v1.RuntimeService/Version
	May 10 17:57:46 addons-573653 crio[855]: time="2025-05-10 17:57:46.605757440Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2511eeef-f57a-4797-946a-fb4a33b5d692 name=/runtime.v1.ImageService/ImageFsInfo
	May 10 17:57:46 addons-573653 crio[855]: time="2025-05-10 17:57:46.607171311Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746899866607142740,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603048,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2511eeef-f57a-4797-946a-fb4a33b5d692 name=/runtime.v1.ImageService/ImageFsInfo
	May 10 17:57:46 addons-573653 crio[855]: time="2025-05-10 17:57:46.608210438Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0afcada9-73eb-4d3a-b409-3342924bad23 name=/runtime.v1.RuntimeService/ListContainers
	May 10 17:57:46 addons-573653 crio[855]: time="2025-05-10 17:57:46.608823801Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0afcada9-73eb-4d3a-b409-3342924bad23 name=/runtime.v1.RuntimeService/ListContainers
	May 10 17:57:46 addons-573653 crio[855]: time="2025-05-10 17:57:46.609146990Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d6dd654d18521b56b4ec819bcfad77d306e33631a230ff043db77d7ab89b171b,PodSandboxId:a965d7764846cf9d9b5282b769985f2f84a089fc6946e5042e6ed4971e6449f4,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1746899866361472673,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-7d9564db4-95ppt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 21902b6a-8c5a-40cd-a3c2-7f45f3221b6a,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.p
orts: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6edd817eaccc76427e71ebf123ae857d788248a10fb08e9017f51bd68cfd1e2c,PodSandboxId:8414ff9c4576b5a9fec43a6d2a378284e58ae09d5ef30105bbb7e31e72958b5a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:62223d644fa234c3a1cc785ee14242ec47a77364226f1c811d2f669f96dc2ac8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6769dc3a703c719c1d2756bda113659be28ae16cf0da58dd5fd823d6b9a050ea,State:CONTAINER_RUNNING,CreatedAt:1746899726976335202,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 99d0701e-2474-4490-adc0-d9078c08bee4,},Annotations:map[string]string{io.kubernete
s.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e40eb98f106a50906505731c2dc1ee0a0d51904474e8d9d8afbf04efc46687d,PodSandboxId:80c06c24ece23c60c27d41e6427d5e6400bb2c249b794879d854bc3173b15c30,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1746899680639795411,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d50a2768-dbe5-442b-b3
a0-5dc397a99a69,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5b6e50764a1243e201c9fac19ad1c2f704c3c38dba39c48de741cf4fc8a522a,PodSandboxId:c2dc6372e0e83545396dcd501dc476d24f140877180e890fd66bed15c524c3c0,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1746899671325072061,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7c9f76cd49-c2vhb,io.kubernetes.pod.namespace: ingress-nginx,io
.kubernetes.pod.uid: deb79d17-d72b-4a13-9572-6dc927470944,},Annotations:map[string]string{io.kubernetes.container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b7e884c12f55b6ec5beeabbf81e9ff774ee12232ae23d93ed99bf8ea7ab7c910,PodSandboxId:7ea6cd16719c55dec59442df0ccdb745c6209eb2a85ccde8f776aa4a5d4be747,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff
8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1746899643829090920,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-lqpfr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: eb5bf4ab-f305-415b-a0b7-edec4fca3d37,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa320dc25a4b7544181c29eff84a1a0a9e66dd2ba5b7f901bfece778bf55b005,PodSandboxId:a197c8a158f28f3f4a2716b50706e581d7ddc1c199e486d8b8b24098c0891118,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbf
bb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1746899642543388666,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4v7j7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9313dc25-9fa6-444e-ae30-19d0982a0ab3,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f1973290582db2b546cfc5ecf153f012a638436cab98cec29d22c965ebf9c45,PodSandboxId:1054e269b365f57fc94525e37e28a8f5ed2ce87808c3989c44e8d8ec00dd6c54,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube
/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1746899623051560522,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15d89e41-22de-487e-8436-801252358cf7,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cce31af5f8769969248bbeacd59b711b5cfbf85e133668f009ab1629adc0b6e,PodSandboxId:1c48dd822bd5d5b17ff6ce0d8e189d32c8cf
fc2ed6958e4b0eea9d5b7192b446,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1746899597286547513,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98a4f21e-497b-4a9a-b4ee-a8c30688ea41,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90822645b04b1b46a5f0c742bab13f4f88dc45cebf8572d0bf574438d81ec390,PodSandboxId:bac25ba312c37ddcaa5c9bfd69ce4797519b9a39ead6be78
41f3db7cc6f0137d,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1746899597119998368,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-6bgxv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f404a532-f8a5-4910-b4fd-829feef931fb,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:524dab1f7225ab150d072d3ab6f462c33eca016c9fc41df0d6eca1dead64df03,PodSandboxId:2c1fa3d5
2a945da0edb9b4580945cc4e62a693547d7f11c990b2a5fd9abd2bdb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1746899592349635393,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-ng4h8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27c21a48-fba1-4ce8-b143-5060ef2e095d,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:207c61dbf088cda58109b444a955b23d8dab0912dc10afed0c59d1a9d82de721,PodSandboxId:a9dc3dc9623be09aa14aa5ea7f4e1ab6f72c8f52c65c9287fa58a8559799394b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,State:CONTAINER_RUNNING,CreatedAt:1746899591847475181,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vhxfm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e9851f7-f262-4178-a042-dc33b7249403,},Annotations:map[string]string{io.kubernetes.container.hash: 2406bd3f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21ca5d56715c241aa417e8b5cc2214306af49fb616b06ec5efd7b147159fb5ee,PodSandboxId:a14092b7353f4399af6eecac59ff17fbb05953986f012a3142a0ad5a0e5377db,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,State:CONTAINER_RUNNING,CreatedAt:1746899579740455571,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-573653,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af14e4298804f6652203bedef0d5fac1,},Annotations:map[string]string{io.kubernetes.container.hash: fd54b99d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5314135b24b036d09c5a5ad6fe5eae2bc592daa59fec03ed7bb93b54d11f5f8,PodSandboxId:0837b915b9f06f9d73256b1df30b89b63f741a4f0298ce4548986446f90e8bcb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,State:CONTAINER_RUNNING,CreatedAt:1746899579760773905,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-573653,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9dbec509d86258fc7c292029613cd58,},Annotations:map[string]string{io.kubernetes.container.hash: 20846f37,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d9643e5261749222b6ce4a8ea3461d3482d18ab904c90cb3dae2a3a46d20d98,PodSandboxId:c995748ece4d1c57f742fb18736ddb37a81dccf07455326dd88fdc88a6f1ebb2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1746899579821029947,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-573653,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 337a50b11c80dd43979374987a91fb64,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},&Container{Id:a3e9984f17f70d54dc58f16aebd59d5838104af1a74cf4a50d966447508cc65a,PodSandboxId:3369a35b1dc99d0b03daa87fcb2d88e57c57f72950e1aa5a7b6690c87e1500c3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,State:CONTAINER_RUNNING,CreatedAt:1746899579723015146,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-573653,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1cfbe91fbb233869676d7d6c4f43282,},Annotations:map[string]string{io.kubernetes.container.hash: 2e2dc675,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file=
"otel-collector/interceptors.go:74" id=0afcada9-73eb-4d3a-b409-3342924bad23 name=/runtime.v1.RuntimeService/ListContainers
	May 10 17:57:46 addons-573653 crio[855]: time="2025-05-10 17:57:46.647208453Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=20dca9bf-7cbc-4dd7-9daf-f838c6af8440 name=/runtime.v1.RuntimeService/Version
	May 10 17:57:46 addons-573653 crio[855]: time="2025-05-10 17:57:46.647319687Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=20dca9bf-7cbc-4dd7-9daf-f838c6af8440 name=/runtime.v1.RuntimeService/Version
	May 10 17:57:46 addons-573653 crio[855]: time="2025-05-10 17:57:46.648587988Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1be7c71f-aa90-494b-a56c-e29fc3eb526b name=/runtime.v1.ImageService/ImageFsInfo
	May 10 17:57:46 addons-573653 crio[855]: time="2025-05-10 17:57:46.650086265Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746899866650055984,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603048,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1be7c71f-aa90-494b-a56c-e29fc3eb526b name=/runtime.v1.ImageService/ImageFsInfo
	May 10 17:57:46 addons-573653 crio[855]: time="2025-05-10 17:57:46.651289787Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8905f967-ca1b-4ced-bbd5-299bca366bac name=/runtime.v1.RuntimeService/ListContainers
	May 10 17:57:46 addons-573653 crio[855]: time="2025-05-10 17:57:46.651464158Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8905f967-ca1b-4ced-bbd5-299bca366bac name=/runtime.v1.RuntimeService/ListContainers
	May 10 17:57:46 addons-573653 crio[855]: time="2025-05-10 17:57:46.652277706Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d6dd654d18521b56b4ec819bcfad77d306e33631a230ff043db77d7ab89b171b,PodSandboxId:a965d7764846cf9d9b5282b769985f2f84a089fc6946e5042e6ed4971e6449f4,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1746899866361472673,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-7d9564db4-95ppt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 21902b6a-8c5a-40cd-a3c2-7f45f3221b6a,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.p
orts: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6edd817eaccc76427e71ebf123ae857d788248a10fb08e9017f51bd68cfd1e2c,PodSandboxId:8414ff9c4576b5a9fec43a6d2a378284e58ae09d5ef30105bbb7e31e72958b5a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:62223d644fa234c3a1cc785ee14242ec47a77364226f1c811d2f669f96dc2ac8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6769dc3a703c719c1d2756bda113659be28ae16cf0da58dd5fd823d6b9a050ea,State:CONTAINER_RUNNING,CreatedAt:1746899726976335202,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 99d0701e-2474-4490-adc0-d9078c08bee4,},Annotations:map[string]string{io.kubernete
s.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e40eb98f106a50906505731c2dc1ee0a0d51904474e8d9d8afbf04efc46687d,PodSandboxId:80c06c24ece23c60c27d41e6427d5e6400bb2c249b794879d854bc3173b15c30,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1746899680639795411,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d50a2768-dbe5-442b-b3
a0-5dc397a99a69,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5b6e50764a1243e201c9fac19ad1c2f704c3c38dba39c48de741cf4fc8a522a,PodSandboxId:c2dc6372e0e83545396dcd501dc476d24f140877180e890fd66bed15c524c3c0,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1746899671325072061,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7c9f76cd49-c2vhb,io.kubernetes.pod.namespace: ingress-nginx,io
.kubernetes.pod.uid: deb79d17-d72b-4a13-9572-6dc927470944,},Annotations:map[string]string{io.kubernetes.container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b7e884c12f55b6ec5beeabbf81e9ff774ee12232ae23d93ed99bf8ea7ab7c910,PodSandboxId:7ea6cd16719c55dec59442df0ccdb745c6209eb2a85ccde8f776aa4a5d4be747,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff
8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1746899643829090920,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-lqpfr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: eb5bf4ab-f305-415b-a0b7-edec4fca3d37,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa320dc25a4b7544181c29eff84a1a0a9e66dd2ba5b7f901bfece778bf55b005,PodSandboxId:a197c8a158f28f3f4a2716b50706e581d7ddc1c199e486d8b8b24098c0891118,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbf
bb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1746899642543388666,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4v7j7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9313dc25-9fa6-444e-ae30-19d0982a0ab3,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f1973290582db2b546cfc5ecf153f012a638436cab98cec29d22c965ebf9c45,PodSandboxId:1054e269b365f57fc94525e37e28a8f5ed2ce87808c3989c44e8d8ec00dd6c54,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube
/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1746899623051560522,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15d89e41-22de-487e-8436-801252358cf7,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cce31af5f8769969248bbeacd59b711b5cfbf85e133668f009ab1629adc0b6e,PodSandboxId:1c48dd822bd5d5b17ff6ce0d8e189d32c8cf
fc2ed6958e4b0eea9d5b7192b446,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1746899597286547513,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98a4f21e-497b-4a9a-b4ee-a8c30688ea41,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90822645b04b1b46a5f0c742bab13f4f88dc45cebf8572d0bf574438d81ec390,PodSandboxId:bac25ba312c37ddcaa5c9bfd69ce4797519b9a39ead6be78
41f3db7cc6f0137d,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1746899597119998368,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-6bgxv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f404a532-f8a5-4910-b4fd-829feef931fb,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:524dab1f7225ab150d072d3ab6f462c33eca016c9fc41df0d6eca1dead64df03,PodSandboxId:2c1fa3d5
2a945da0edb9b4580945cc4e62a693547d7f11c990b2a5fd9abd2bdb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1746899592349635393,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-ng4h8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27c21a48-fba1-4ce8-b143-5060ef2e095d,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:207c61dbf088cda58109b444a955b23d8dab0912dc10afed0c59d1a9d82de721,PodSandboxId:a9dc3dc9623be09aa14aa5ea7f4e1ab6f72c8f52c65c9287fa58a8559799394b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,State:CONTAINER_RUNNING,CreatedAt:1746899591847475181,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vhxfm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e9851f7-f262-4178-a042-dc33b7249403,},Annotations:map[string]string{io.kubernetes.container.hash: 2406bd3f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21ca5d56715c241aa417e8b5cc2214306af49fb616b06ec5efd7b147159fb5ee,PodSandboxId:a14092b7353f4399af6eecac59ff17fbb05953986f012a3142a0ad5a0e5377db,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,State:CONTAINER_RUNNING,CreatedAt:1746899579740455571,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-573653,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af14e4298804f6652203bedef0d5fac1,},Annotations:map[string]string{io.kubernetes.container.hash: fd54b99d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5314135b24b036d09c5a5ad6fe5eae2bc592daa59fec03ed7bb93b54d11f5f8,PodSandboxId:0837b915b9f06f9d73256b1df30b89b63f741a4f0298ce4548986446f90e8bcb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,State:CONTAINER_RUNNING,CreatedAt:1746899579760773905,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-573653,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9dbec509d86258fc7c292029613cd58,},Annotations:map[string]string{io.kubernetes.container.hash: 20846f37,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d9643e5261749222b6ce4a8ea3461d3482d18ab904c90cb3dae2a3a46d20d98,PodSandboxId:c995748ece4d1c57f742fb18736ddb37a81dccf07455326dd88fdc88a6f1ebb2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1746899579821029947,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-573653,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 337a50b11c80dd43979374987a91fb64,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},&Container{Id:a3e9984f17f70d54dc58f16aebd59d5838104af1a74cf4a50d966447508cc65a,PodSandboxId:3369a35b1dc99d0b03daa87fcb2d88e57c57f72950e1aa5a7b6690c87e1500c3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,State:CONTAINER_RUNNING,CreatedAt:1746899579723015146,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-573653,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1cfbe91fbb233869676d7d6c4f43282,},Annotations:map[string]string{io.kubernetes.container.hash: 2e2dc675,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file=
"otel-collector/interceptors.go:74" id=8905f967-ca1b-4ced-bbd5-299bca366bac name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	d6dd654d18521       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app           0                   a965d7764846c       hello-world-app-7d9564db4-95ppt
	6edd817eaccc7       docker.io/library/nginx@sha256:62223d644fa234c3a1cc785ee14242ec47a77364226f1c811d2f669f96dc2ac8                              2 minutes ago            Running             nginx                     0                   8414ff9c4576b       nginx
	0e40eb98f106a       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago            Running             busybox                   0                   80c06c24ece23       busybox
	e5b6e50764a12       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago            Running             controller                0                   c2dc6372e0e83       ingress-nginx-controller-7c9f76cd49-c2vhb
	b7e884c12f55b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago            Exited              patch                     0                   7ea6cd16719c5       ingress-nginx-admission-patch-lqpfr
	fa320dc25a4b7       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago            Exited              create                    0                   a197c8a158f28       ingress-nginx-admission-create-4v7j7
	0f1973290582d       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             4 minutes ago            Running             minikube-ingress-dns      0                   1054e269b365f       kube-ingress-dns-minikube
	2cce31af5f876       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago            Running             storage-provisioner       0                   1c48dd822bd5d       storage-provisioner
	90822645b04b1       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago            Running             amd-gpu-device-plugin     0                   bac25ba312c37       amd-gpu-device-plugin-6bgxv
	524dab1f7225a       1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b                                                             4 minutes ago            Running             coredns                   0                   2c1fa3d52a945       coredns-674b8bbfcf-ng4h8
	207c61dbf088c       f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68                                                             4 minutes ago            Running             kube-proxy                0                   a9dc3dc9623be       kube-proxy-vhxfm
	8d9643e526174       499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1                                                             4 minutes ago            Running             etcd                      0                   c995748ece4d1       etcd-addons-573653
	c5314135b24b0       1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02                                                             4 minutes ago            Running             kube-controller-manager   0                   0837b915b9f06       kube-controller-manager-addons-573653
	21ca5d56715c2       8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4                                                             4 minutes ago            Running             kube-scheduler            0                   a14092b7353f4       kube-scheduler-addons-573653
	a3e9984f17f70       6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4                                                             4 minutes ago            Running             kube-apiserver            0                   3369a35b1dc99       kube-apiserver-addons-573653
	
	
	==> coredns [524dab1f7225ab150d072d3ab6f462c33eca016c9fc41df0d6eca1dead64df03] <==
	[INFO] 10.244.0.9:48820 - 60075 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000129484s
	[INFO] 10.244.0.9:48820 - 19752 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000096214s
	[INFO] 10.244.0.9:48820 - 32393 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000523959s
	[INFO] 10.244.0.9:48820 - 31201 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000147146s
	[INFO] 10.244.0.9:48820 - 50407 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000081662s
	[INFO] 10.244.0.9:48820 - 7779 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.00014226s
	[INFO] 10.244.0.9:48820 - 32800 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000182241s
	[INFO] 10.244.0.9:49485 - 30724 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000248433s
	[INFO] 10.244.0.9:49485 - 30341 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000105979s
	[INFO] 10.244.0.9:41926 - 17172 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000132599s
	[INFO] 10.244.0.9:41926 - 16912 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00007056s
	[INFO] 10.244.0.9:36837 - 22962 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000102708s
	[INFO] 10.244.0.9:36837 - 23195 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00009754s
	[INFO] 10.244.0.9:49119 - 39026 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000111558s
	[INFO] 10.244.0.9:49119 - 39241 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000165821s
	[INFO] 10.244.0.23:35177 - 60367 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000647607s
	[INFO] 10.244.0.23:49683 - 5388 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000186526s
	[INFO] 10.244.0.23:33035 - 21919 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000135235s
	[INFO] 10.244.0.23:49344 - 60321 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000320928s
	[INFO] 10.244.0.23:56824 - 25922 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000130439s
	[INFO] 10.244.0.23:60913 - 53572 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000111277s
	[INFO] 10.244.0.23:42179 - 17463 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001322408s
	[INFO] 10.244.0.23:44403 - 5575 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001547749s
	[INFO] 10.244.0.26:39065 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000548348s
	[INFO] 10.244.0.26:39030 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000255972s
	
	
	==> describe nodes <==
	Name:               addons-573653
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-573653
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e96c83983357cd8557f3cdfe077a25cc73d485a4
	                    minikube.k8s.io/name=addons-573653
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_05_10T17_53_05_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-573653
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 May 2025 17:53:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-573653
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 May 2025 17:57:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 May 2025 17:55:39 +0000   Sat, 10 May 2025 17:53:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 May 2025 17:55:39 +0000   Sat, 10 May 2025 17:53:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 May 2025 17:55:39 +0000   Sat, 10 May 2025 17:53:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 May 2025 17:55:39 +0000   Sat, 10 May 2025 17:53:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.219
	  Hostname:    addons-573653
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912740Ki
	  pods:               110
	System Info:
	  Machine ID:                 17133487e28a420cbd8755eeb7aa9874
	  System UUID:                17133487-e28a-420c-bd87-55eeb7aa9874
	  Boot ID:                    e0f9f671-4b8c-456e-a1a1-7180eeb6628c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2024.11.2
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.33.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m9s
	  default                     hello-world-app-7d9564db4-95ppt              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  ingress-nginx               ingress-nginx-controller-7c9f76cd49-c2vhb    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m27s
	  kube-system                 amd-gpu-device-plugin-6bgxv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 coredns-674b8bbfcf-ng4h8                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m36s
	  kube-system                 etcd-addons-573653                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m41s
	  kube-system                 kube-apiserver-addons-573653                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 kube-controller-manager-addons-573653        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 kube-proxy-vhxfm                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 kube-scheduler-addons-573653                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m33s  kube-proxy       
	  Normal  Starting                 4m41s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m41s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m41s  kubelet          Node addons-573653 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m41s  kubelet          Node addons-573653 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m41s  kubelet          Node addons-573653 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m40s  kubelet          Node addons-573653 status is now: NodeReady
	  Normal  RegisteredNode           4m37s  node-controller  Node addons-573653 event: Registered Node addons-573653 in Controller
	
	
	==> dmesg <==
	[  +0.088610] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.107659] kauditd_printk_skb: 74 callbacks suppressed
	[May10 17:53] kauditd_printk_skb: 67 callbacks suppressed
	[  +0.000086] kauditd_printk_skb: 19 callbacks suppressed
	[  +2.533590] kauditd_printk_skb: 92 callbacks suppressed
	[  +1.652639] kauditd_printk_skb: 114 callbacks suppressed
	[ +11.642349] kauditd_printk_skb: 134 callbacks suppressed
	[  +4.648026] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.395433] kauditd_printk_skb: 2 callbacks suppressed
	[May10 17:54] kauditd_printk_skb: 31 callbacks suppressed
	[  +2.350802] kauditd_printk_skb: 21 callbacks suppressed
	[  +3.120805] kauditd_printk_skb: 12 callbacks suppressed
	[  +2.153786] kauditd_printk_skb: 34 callbacks suppressed
	[  +0.000046] kauditd_printk_skb: 22 callbacks suppressed
	[  +2.220519] kauditd_printk_skb: 25 callbacks suppressed
	[  +2.552373] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.112843] kauditd_printk_skb: 2 callbacks suppressed
	[May10 17:55] kauditd_printk_skb: 13 callbacks suppressed
	[  +0.052008] kauditd_printk_skb: 15 callbacks suppressed
	[  +2.782165] kauditd_printk_skb: 54 callbacks suppressed
	[  +2.257573] kauditd_printk_skb: 48 callbacks suppressed
	[  +1.209609] kauditd_printk_skb: 56 callbacks suppressed
	[  +0.000050] kauditd_printk_skb: 13 callbacks suppressed
	[  +6.717390] kauditd_printk_skb: 7 callbacks suppressed
	[  +9.538758] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [8d9643e5261749222b6ce4a8ea3461d3482d18ab904c90cb3dae2a3a46d20d98] <==
	{"level":"info","ts":"2025-05-10T17:55:01.227385Z","caller":"traceutil/trace.go:171","msg":"trace[785886573] transaction","detail":"{read_only:false; response_revision:1325; number_of_response:1; }","duration":"459.254519ms","start":"2025-05-10T17:55:00.768124Z","end":"2025-05-10T17:55:01.227378Z","steps":["trace[785886573] 'process raft request'  (duration: 458.998221ms)"],"step_count":1}
	{"level":"warn","ts":"2025-05-10T17:55:01.227459Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-05-10T17:55:00.768106Z","time spent":"459.295979ms","remote":"127.0.0.1:59458","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1324 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-05-10T17:55:01.227583Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"431.457242ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-05-10T17:55:01.227724Z","caller":"traceutil/trace.go:171","msg":"trace[2138350984] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1325; }","duration":"431.520874ms","start":"2025-05-10T17:55:00.796099Z","end":"2025-05-10T17:55:01.227620Z","steps":["trace[2138350984] 'agreement among raft nodes before linearized reading'  (duration: 431.46206ms)"],"step_count":1}
	{"level":"warn","ts":"2025-05-10T17:55:01.227792Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-05-10T17:55:00.796085Z","time spent":"431.696718ms","remote":"127.0.0.1:59480","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-05-10T17:55:01.227965Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"429.168971ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-05-10T17:55:01.228007Z","caller":"traceutil/trace.go:171","msg":"trace[1286207526] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1325; }","duration":"429.229792ms","start":"2025-05-10T17:55:00.798771Z","end":"2025-05-10T17:55:01.228000Z","steps":["trace[1286207526] 'agreement among raft nodes before linearized reading'  (duration: 429.175079ms)"],"step_count":1}
	{"level":"warn","ts":"2025-05-10T17:55:01.228025Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-05-10T17:55:00.798758Z","time spent":"429.262292ms","remote":"127.0.0.1:59480","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-05-10T17:55:01.228123Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"248.316957ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-05-10T17:55:01.228158Z","caller":"traceutil/trace.go:171","msg":"trace[822767406] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1325; }","duration":"248.351188ms","start":"2025-05-10T17:55:00.979802Z","end":"2025-05-10T17:55:01.228153Z","steps":["trace[822767406] 'agreement among raft nodes before linearized reading'  (duration: 248.306892ms)"],"step_count":1}
	{"level":"warn","ts":"2025-05-10T17:55:01.228295Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"392.672901ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-05-10T17:55:01.228650Z","caller":"traceutil/trace.go:171","msg":"trace[1457272132] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1325; }","duration":"393.044227ms","start":"2025-05-10T17:55:00.835597Z","end":"2025-05-10T17:55:01.228641Z","steps":["trace[1457272132] 'agreement among raft nodes before linearized reading'  (duration: 392.682426ms)"],"step_count":1}
	{"level":"warn","ts":"2025-05-10T17:55:01.228810Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-05-10T17:55:00.835583Z","time spent":"393.214769ms","remote":"127.0.0.1:59480","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-05-10T17:55:01.228903Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"421.367474ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-05-10T17:55:01.228937Z","caller":"traceutil/trace.go:171","msg":"trace[143554539] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1325; }","duration":"421.416629ms","start":"2025-05-10T17:55:00.807514Z","end":"2025-05-10T17:55:01.228931Z","steps":["trace[143554539] 'agreement among raft nodes before linearized reading'  (duration: 421.372433ms)"],"step_count":1}
	{"level":"warn","ts":"2025-05-10T17:55:01.228958Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-05-10T17:55:00.807501Z","time spent":"421.452965ms","remote":"127.0.0.1:59284","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2025-05-10T17:55:01.229048Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"430.23432ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-05-10T17:55:01.229082Z","caller":"traceutil/trace.go:171","msg":"trace[676360376] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1325; }","duration":"430.275279ms","start":"2025-05-10T17:55:00.798802Z","end":"2025-05-10T17:55:01.229077Z","steps":["trace[676360376] 'agreement among raft nodes before linearized reading'  (duration: 430.225584ms)"],"step_count":1}
	{"level":"warn","ts":"2025-05-10T17:55:01.229101Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-05-10T17:55:00.798799Z","time spent":"430.2985ms","remote":"127.0.0.1:59480","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-05-10T17:55:11.741834Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"312.815023ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-05-10T17:55:11.741878Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"281.798878ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-05-10T17:55:11.741927Z","caller":"traceutil/trace.go:171","msg":"trace[2139473578] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1453; }","duration":"312.968285ms","start":"2025-05-10T17:55:11.428937Z","end":"2025-05-10T17:55:11.741905Z","steps":["trace[2139473578] 'range keys from in-memory index tree'  (duration: 312.748896ms)"],"step_count":1}
	{"level":"info","ts":"2025-05-10T17:55:11.741933Z","caller":"traceutil/trace.go:171","msg":"trace[791592575] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1453; }","duration":"281.856254ms","start":"2025-05-10T17:55:11.460061Z","end":"2025-05-10T17:55:11.741917Z","steps":["trace[791592575] 'range keys from in-memory index tree'  (duration: 281.770593ms)"],"step_count":1}
	{"level":"warn","ts":"2025-05-10T17:55:11.741965Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-05-10T17:55:11.428922Z","time spent":"313.030473ms","remote":"127.0.0.1:59480","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2025-05-10T17:55:13.725960Z","caller":"traceutil/trace.go:171","msg":"trace[1791893911] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1457; }","duration":"106.891396ms","start":"2025-05-10T17:55:13.619056Z","end":"2025-05-10T17:55:13.725947Z","steps":["trace[1791893911] 'process raft request'  (duration: 106.803298ms)"],"step_count":1}
	
	
	==> kernel <==
	 17:57:47 up 5 min,  0 user,  load average: 0.30, 0.98, 0.54
	Linux addons-573653 5.10.207 #1 SMP Fri May 9 03:49:24 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2024.11.2"
	
	
	==> kube-apiserver [a3e9984f17f70d54dc58f16aebd59d5838104af1a74cf4a50d966447508cc65a] <==
	I0510 17:55:20.515836       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	W0510 17:55:21.195572       1 cacher.go:183] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0510 17:55:21.952640       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:55:25.833483       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0510 17:55:26.061632       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.114.110"}
	I0510 17:55:26.070532       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E0510 17:55:38.390019       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0510 17:55:41.906850       1 handler.go:288] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0510 17:55:41.906887       1 handler.go:288] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0510 17:55:41.943242       1 handler.go:288] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0510 17:55:41.943311       1 handler.go:288] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0510 17:55:41.946723       1 handler.go:288] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0510 17:55:41.946824       1 handler.go:288] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0510 17:55:41.971208       1 handler.go:288] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0510 17:55:41.971328       1 handler.go:288] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	E0510 17:55:42.007161       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"snapshot-controller\" not found]"
	I0510 17:55:42.034308       1 handler.go:288] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0510 17:55:42.034526       1 handler.go:288] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0510 17:55:42.947267       1 cacher.go:183] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0510 17:55:43.036228       1 cacher.go:183] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0510 17:55:43.171590       1 cacher.go:183] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0510 17:55:43.281217       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:55:59.552950       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0510 17:57:45.219050       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:57:45.222371       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.50.225"}
	
	
	==> kube-controller-manager [c5314135b24b036d09c5a5ad6fe5eae2bc592daa59fec03ed7bb93b54d11f5f8] <==
	E0510 17:55:43.948879       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:55:44.320646       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:55:45.556896       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:55:46.304420       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:55:47.005463       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:55:50.557199       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:55:50.885330       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:55:51.455141       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:56:01.628373       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:56:01.824320       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:56:03.794226       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:56:06.033563       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I0510 17:56:09.516299       1 shared_informer.go:350] "Waiting for caches to sync" controller="resource quota"
	I0510 17:56:09.516594       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0510 17:56:10.251934       1 shared_informer.go:350] "Waiting for caches to sync" controller="garbage collector"
	I0510 17:56:10.252005       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	E0510 17:56:22.417933       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:56:22.673806       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:56:24.533379       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:56:46.819090       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:56:56.494181       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:57:00.888333       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:57:11.925393       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:57:31.562769       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:57:38.667346       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [207c61dbf088cda58109b444a955b23d8dab0912dc10afed0c59d1a9d82de721] <==
	E0510 17:53:12.842847       1 proxier.go:732] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0510 17:53:12.915911       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.39.219"]
	E0510 17:53:12.916012       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0510 17:53:13.026865       1 server_linux.go:122] "No iptables support for family" ipFamily="IPv6"
	I0510 17:53:13.026925       1 server.go:256] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0510 17:53:13.026966       1 server_linux.go:145] "Using iptables Proxier"
	I0510 17:53:13.075098       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0510 17:53:13.075480       1 server.go:516] "Version info" version="v1.33.0"
	I0510 17:53:13.075513       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0510 17:53:13.091095       1 config.go:199] "Starting service config controller"
	I0510 17:53:13.091133       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0510 17:53:13.091152       1 config.go:105] "Starting endpoint slice config controller"
	I0510 17:53:13.091156       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0510 17:53:13.091185       1 config.go:440] "Starting serviceCIDR config controller"
	I0510 17:53:13.091188       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0510 17:53:13.091222       1 config.go:329] "Starting node config controller"
	I0510 17:53:13.091246       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0510 17:53:13.192119       1 shared_informer.go:357] "Caches are synced" controller="node config"
	I0510 17:53:13.192161       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
	I0510 17:53:13.192173       1 shared_informer.go:357] "Caches are synced" controller="service config"
	I0510 17:53:13.192129       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [21ca5d56715c241aa417e8b5cc2214306af49fb616b06ec5efd7b147159fb5ee] <==
	E0510 17:53:02.395066       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0510 17:53:02.395140       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0510 17:53:02.395189       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0510 17:53:02.395224       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0510 17:53:02.395283       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0510 17:53:02.395343       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0510 17:53:02.398992       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0510 17:53:02.399065       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0510 17:53:02.399174       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0510 17:53:02.399232       1 reflector.go:200] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0510 17:53:02.399290       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0510 17:53:02.399326       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0510 17:53:02.399380       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0510 17:53:03.237917       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0510 17:53:03.247914       1 reflector.go:200] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0510 17:53:03.441152       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0510 17:53:03.464564       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0510 17:53:03.538340       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0510 17:53:03.596375       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0510 17:53:03.640045       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0510 17:53:03.641007       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0510 17:53:03.717814       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0510 17:53:03.730554       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0510 17:53:03.747572       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I0510 17:53:06.586987       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	May 10 17:56:06 addons-573653 kubelet[1558]: I0510 17:56:06.770785    1558 scope.go:117] "RemoveContainer" containerID="c427426029a185e3856b0d8a3512968f0fb9492d2590c19211e120b1a771a8dd"
	May 10 17:56:06 addons-573653 kubelet[1558]: I0510 17:56:06.890478    1558 scope.go:117] "RemoveContainer" containerID="d2f2b6fd888d7ac844ba054c5b8fc7a3a5f89161f0a01bf2d4c3f9b454912e92"
	May 10 17:56:15 addons-573653 kubelet[1558]: E0510 17:56:15.698177    1558 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746899775697283986,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:594442,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:56:15 addons-573653 kubelet[1558]: E0510 17:56:15.698217    1558 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746899775697283986,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:594442,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:56:25 addons-573653 kubelet[1558]: E0510 17:56:25.701444    1558 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746899785701020599,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:594442,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:56:25 addons-573653 kubelet[1558]: E0510 17:56:25.701469    1558 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746899785701020599,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:594442,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:56:35 addons-573653 kubelet[1558]: E0510 17:56:35.706477    1558 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746899795705766622,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:594442,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:56:35 addons-573653 kubelet[1558]: E0510 17:56:35.706523    1558 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746899795705766622,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:594442,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:56:37 addons-573653 kubelet[1558]: I0510 17:56:37.293339    1558 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-6bgxv" secret="" err="secret \"gcp-auth\" not found"
	May 10 17:56:45 addons-573653 kubelet[1558]: E0510 17:56:45.711288    1558 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746899805710578694,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:594442,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:56:45 addons-573653 kubelet[1558]: E0510 17:56:45.711817    1558 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746899805710578694,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:594442,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:56:55 addons-573653 kubelet[1558]: E0510 17:56:55.715052    1558 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746899815714430170,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:594442,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:56:55 addons-573653 kubelet[1558]: E0510 17:56:55.715561    1558 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746899815714430170,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:594442,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:56:56 addons-573653 kubelet[1558]: I0510 17:56:56.293049    1558 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	May 10 17:57:05 addons-573653 kubelet[1558]: E0510 17:57:05.720926    1558 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746899825720304524,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:594442,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:57:05 addons-573653 kubelet[1558]: E0510 17:57:05.720984    1558 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746899825720304524,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:594442,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:57:15 addons-573653 kubelet[1558]: E0510 17:57:15.724179    1558 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746899835723529053,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:594442,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:57:15 addons-573653 kubelet[1558]: E0510 17:57:15.724538    1558 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746899835723529053,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:594442,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:57:25 addons-573653 kubelet[1558]: E0510 17:57:25.730163    1558 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746899845729454882,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:594442,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:57:25 addons-573653 kubelet[1558]: E0510 17:57:25.730629    1558 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746899845729454882,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:594442,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:57:35 addons-573653 kubelet[1558]: E0510 17:57:35.736380    1558 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746899855735595404,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:594442,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:57:35 addons-573653 kubelet[1558]: E0510 17:57:35.736439    1558 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746899855735595404,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:594442,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:57:45 addons-573653 kubelet[1558]: I0510 17:57:45.110307    1558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j5tw\" (UniqueName: \"kubernetes.io/projected/21902b6a-8c5a-40cd-a3c2-7f45f3221b6a-kube-api-access-7j5tw\") pod \"hello-world-app-7d9564db4-95ppt\" (UID: \"21902b6a-8c5a-40cd-a3c2-7f45f3221b6a\") " pod="default/hello-world-app-7d9564db4-95ppt"
	May 10 17:57:45 addons-573653 kubelet[1558]: E0510 17:57:45.739144    1558 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746899865738639048,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:594442,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:57:45 addons-573653 kubelet[1558]: E0510 17:57:45.739220    1558 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746899865738639048,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:594442,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [2cce31af5f8769969248bbeacd59b711b5cfbf85e133668f009ab1629adc0b6e] <==
	W0510 17:57:22.041194       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:57:24.045170       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:57:24.053351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:57:26.057959       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:57:26.068300       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:57:28.075826       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:57:28.083892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:57:30.088282       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:57:30.094095       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:57:32.097898       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:57:32.107542       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:57:34.111101       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:57:34.116929       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:57:36.120269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:57:36.129849       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:57:38.133428       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:57:38.139485       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:57:40.144294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:57:40.152906       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:57:42.156625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:57:42.161990       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:57:44.164980       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:57:44.175150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:57:46.179762       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:57:46.185045       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-573653 -n addons-573653
helpers_test.go:261: (dbg) Run:  kubectl --context addons-573653 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-4v7j7 ingress-nginx-admission-patch-lqpfr
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-573653 describe pod ingress-nginx-admission-create-4v7j7 ingress-nginx-admission-patch-lqpfr
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-573653 describe pod ingress-nginx-admission-create-4v7j7 ingress-nginx-admission-patch-lqpfr: exit status 1 (61.921809ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-4v7j7" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-lqpfr" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-573653 describe pod ingress-nginx-admission-create-4v7j7 ingress-nginx-admission-patch-lqpfr: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-573653 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-573653 addons disable ingress-dns --alsologtostderr -v=1: (1.230309155s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-573653 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-573653 addons disable ingress --alsologtostderr -v=1: (7.787591796s)
--- FAIL: TestAddons/parallel/Ingress (151.38s)

                                                
                                    
TestFunctional/serial/ExtraConfig (352.19s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-amd64 start -p functional-581506 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0510 18:04:37.810922  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:05:05.524686  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:774: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-581506 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5m50.00614876s)

                                                
                                                
-- stdout --
	* [functional-581506] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20720
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20720-388787/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-388787/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "functional-581506" primary control-plane node in "functional-581506" cluster
	* Updating the running kvm2 "functional-581506" VM ...
	* Preparing Kubernetes v1.33.0 on CRI-O 1.29.1 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded

                                                
                                                
** /stderr **
functional_test.go:776: failed to restart minikube. args "out/minikube-linux-amd64 start -p functional-581506 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:778: restart took 5m50.006379539s for "functional-581506" cluster.
I0510 18:08:36.357757  395980 config.go:182] Loaded profile config "functional-581506": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-581506 -n functional-581506
helpers_test.go:244: <<< TestFunctional/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-581506 logs -n 25: (1.55008837s)
helpers_test.go:252: TestFunctional/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| unpause | nospam-878760 --log_dir                                                  | nospam-878760     | jenkins | v1.35.0 | 10 May 25 18:00 UTC | 10 May 25 18:00 UTC |
	|         | /tmp/nospam-878760 unpause                                               |                   |         |         |                     |                     |
	| unpause | nospam-878760 --log_dir                                                  | nospam-878760     | jenkins | v1.35.0 | 10 May 25 18:00 UTC | 10 May 25 18:00 UTC |
	|         | /tmp/nospam-878760 unpause                                               |                   |         |         |                     |                     |
	| unpause | nospam-878760 --log_dir                                                  | nospam-878760     | jenkins | v1.35.0 | 10 May 25 18:00 UTC | 10 May 25 18:00 UTC |
	|         | /tmp/nospam-878760 unpause                                               |                   |         |         |                     |                     |
	| stop    | nospam-878760 --log_dir                                                  | nospam-878760     | jenkins | v1.35.0 | 10 May 25 18:00 UTC | 10 May 25 18:00 UTC |
	|         | /tmp/nospam-878760 stop                                                  |                   |         |         |                     |                     |
	| stop    | nospam-878760 --log_dir                                                  | nospam-878760     | jenkins | v1.35.0 | 10 May 25 18:00 UTC | 10 May 25 18:00 UTC |
	|         | /tmp/nospam-878760 stop                                                  |                   |         |         |                     |                     |
	| stop    | nospam-878760 --log_dir                                                  | nospam-878760     | jenkins | v1.35.0 | 10 May 25 18:00 UTC | 10 May 25 18:00 UTC |
	|         | /tmp/nospam-878760 stop                                                  |                   |         |         |                     |                     |
	| delete  | -p nospam-878760                                                         | nospam-878760     | jenkins | v1.35.0 | 10 May 25 18:00 UTC | 10 May 25 18:00 UTC |
	| start   | -p functional-581506                                                     | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:00 UTC | 10 May 25 18:02 UTC |
	|         | --memory=4000                                                            |                   |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                   |         |         |                     |                     |
	|         | --wait=all --driver=kvm2                                                 |                   |         |         |                     |                     |
	|         | --container-runtime=crio                                                 |                   |         |         |                     |                     |
	| start   | -p functional-581506                                                     | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:02 UTC | 10 May 25 18:02 UTC |
	|         | --alsologtostderr -v=8                                                   |                   |         |         |                     |                     |
	| cache   | functional-581506 cache add                                              | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:02 UTC | 10 May 25 18:02 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | functional-581506 cache add                                              | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:02 UTC | 10 May 25 18:02 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | functional-581506 cache add                                              | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:02 UTC | 10 May 25 18:02 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-581506 cache add                                              | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:02 UTC | 10 May 25 18:02 UTC |
	|         | minikube-local-cache-test:functional-581506                              |                   |         |         |                     |                     |
	| cache   | functional-581506 cache delete                                           | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:02 UTC | 10 May 25 18:02 UTC |
	|         | minikube-local-cache-test:functional-581506                              |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.35.0 | 10 May 25 18:02 UTC | 10 May 25 18:02 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | list                                                                     | minikube          | jenkins | v1.35.0 | 10 May 25 18:02 UTC | 10 May 25 18:02 UTC |
	| ssh     | functional-581506 ssh sudo                                               | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:02 UTC | 10 May 25 18:02 UTC |
	|         | crictl images                                                            |                   |         |         |                     |                     |
	| ssh     | functional-581506                                                        | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:02 UTC | 10 May 25 18:02 UTC |
	|         | ssh sudo crictl rmi                                                      |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| ssh     | functional-581506 ssh                                                    | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:02 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-581506 cache reload                                           | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:02 UTC | 10 May 25 18:02 UTC |
	| ssh     | functional-581506 ssh                                                    | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:02 UTC | 10 May 25 18:02 UTC |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.35.0 | 10 May 25 18:02 UTC | 10 May 25 18:02 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.35.0 | 10 May 25 18:02 UTC | 10 May 25 18:02 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| kubectl | functional-581506 kubectl --                                             | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:02 UTC | 10 May 25 18:02 UTC |
	|         | --context functional-581506                                              |                   |         |         |                     |                     |
	|         | get pods                                                                 |                   |         |         |                     |                     |
	| start   | -p functional-581506                                                     | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:02 UTC |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |         |         |                     |                     |
	|         | --wait=all                                                               |                   |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/05/10 18:02:46
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0510 18:02:46.396513  402035 out.go:345] Setting OutFile to fd 1 ...
	I0510 18:02:46.396636  402035 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 18:02:46.396640  402035 out.go:358] Setting ErrFile to fd 2...
	I0510 18:02:46.396643  402035 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 18:02:46.396841  402035 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-388787/.minikube/bin
	I0510 18:02:46.397369  402035 out.go:352] Setting JSON to false
	I0510 18:02:46.398311  402035 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":27914,"bootTime":1746872252,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0510 18:02:46.398421  402035 start.go:140] virtualization: kvm guest
	I0510 18:02:46.400743  402035 out.go:177] * [functional-581506] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0510 18:02:46.402186  402035 out.go:177]   - MINIKUBE_LOCATION=20720
	I0510 18:02:46.402177  402035 notify.go:220] Checking for updates...
	I0510 18:02:46.403510  402035 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0510 18:02:46.405219  402035 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20720-388787/kubeconfig
	I0510 18:02:46.406775  402035 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-388787/.minikube
	I0510 18:02:46.408169  402035 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0510 18:02:46.409488  402035 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0510 18:02:46.411314  402035 config.go:182] Loaded profile config "functional-581506": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 18:02:46.411402  402035 driver.go:404] Setting default libvirt URI to qemu:///system
	I0510 18:02:46.411895  402035 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 18:02:46.411958  402035 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:02:46.428015  402035 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42895
	I0510 18:02:46.428521  402035 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:02:46.429033  402035 main.go:141] libmachine: Using API Version  1
	I0510 18:02:46.429050  402035 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:02:46.429423  402035 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:02:46.429597  402035 main.go:141] libmachine: (functional-581506) Calling .DriverName
	I0510 18:02:46.464202  402035 out.go:177] * Using the kvm2 driver based on existing profile
	I0510 18:02:46.465611  402035 start.go:304] selected driver: kvm2
	I0510 18:02:46.465621  402035 start.go:908] validating driver "kvm2" against &{Name:functional-581506 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:functional-581506 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.52 Port:8441 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 18:02:46.465726  402035 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0510 18:02:46.466055  402035 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0510 18:02:46.466154  402035 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20720-388787/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0510 18:02:46.483313  402035 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0510 18:02:46.484300  402035 start_flags.go:975] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0510 18:02:46.484336  402035 cni.go:84] Creating CNI manager for ""
	I0510 18:02:46.484393  402035 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0510 18:02:46.484445  402035 start.go:347] cluster config:
	{Name:functional-581506 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:functional-581506 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.52 Port:8441 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 18:02:46.484546  402035 iso.go:125] acquiring lock: {Name:mk19640015999219180c6685480547adf0c02201 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0510 18:02:46.486929  402035 out.go:177] * Starting "functional-581506" primary control-plane node in "functional-581506" cluster
	I0510 18:02:46.488381  402035 preload.go:131] Checking if preload exists for k8s version v1.33.0 and runtime crio
	I0510 18:02:46.488424  402035 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-cri-o-overlay-amd64.tar.lz4
	I0510 18:02:46.488433  402035 cache.go:56] Caching tarball of preloaded images
	I0510 18:02:46.488558  402035 preload.go:172] Found /home/jenkins/minikube-integration/20720-388787/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0510 18:02:46.488566  402035 cache.go:59] Finished verifying existence of preloaded tar for v1.33.0 on crio
	I0510 18:02:46.488662  402035 profile.go:143] Saving config to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/config.json ...
	I0510 18:02:46.488872  402035 start.go:360] acquireMachinesLock for functional-581506: {Name:mk11499d7756d503a7a24339ad1a7f9ab9dc0fab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0510 18:02:46.488936  402035 start.go:364] duration metric: took 49.209µs to acquireMachinesLock for "functional-581506"
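The acquireMachinesLock entries above follow a simple poll-with-timeout file lock (Delay:500ms, Timeout:13m0s in the logged config). A minimal stdlib-only Go sketch of that pattern; the lock path and helper name are illustrative, not minikube's actual lock package:

	// machines_lock.go: sketch of a poll-with-timeout exclusive lock file.
	// Not minikube's implementation; path and names are hypothetical.
	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
		deadline := time.Now().Add(timeout)
		for {
			// O_EXCL fails if the lock file already exists, i.e. someone else holds it.
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0644)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if !errors.Is(err, os.ErrExist) {
				return nil, err
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("timed out waiting for %s", path)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		release, err := acquireLock("/tmp/minikube-machines.lock", 500*time.Millisecond, 13*time.Minute)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer release()
		fmt.Println("lock held")
	}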
	I0510 18:02:46.488949  402035 start.go:96] Skipping create...Using existing machine configuration
	I0510 18:02:46.488953  402035 fix.go:54] fixHost starting: 
	I0510 18:02:46.489257  402035 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 18:02:46.489298  402035 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:02:46.505903  402035 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46635
	I0510 18:02:46.506581  402035 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:02:46.507080  402035 main.go:141] libmachine: Using API Version  1
	I0510 18:02:46.507090  402035 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:02:46.507470  402035 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:02:46.507695  402035 main.go:141] libmachine: (functional-581506) Calling .DriverName
	I0510 18:02:46.507905  402035 main.go:141] libmachine: (functional-581506) Calling .GetState
	I0510 18:02:46.509827  402035 fix.go:112] recreateIfNeeded on functional-581506: state=Running err=<nil>
	W0510 18:02:46.509841  402035 fix.go:138] unexpected machine state, will restart: <nil>
	I0510 18:02:46.512283  402035 out.go:177] * Updating the running kvm2 "functional-581506" VM ...
	I0510 18:02:46.513904  402035 machine.go:93] provisionDockerMachine start ...
	I0510 18:02:46.513940  402035 main.go:141] libmachine: (functional-581506) Calling .DriverName
	I0510 18:02:46.514326  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHHostname
	I0510 18:02:46.517256  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:46.517672  402035 main.go:141] libmachine: (functional-581506) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:2c:dc", ip: ""} in network mk-functional-581506: {Iface:virbr1 ExpiryTime:2025-05-10 19:00:46 +0000 UTC Type:0 Mac:52:54:00:34:2c:dc Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-581506 Clientid:01:52:54:00:34:2c:dc}
	I0510 18:02:46.517709  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined IP address 192.168.39.52 and MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:46.517917  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHPort
	I0510 18:02:46.518128  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHKeyPath
	I0510 18:02:46.518280  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHKeyPath
	I0510 18:02:46.518424  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHUsername
	I0510 18:02:46.518549  402035 main.go:141] libmachine: Using SSH client type: native
	I0510 18:02:46.518772  402035 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I0510 18:02:46.518777  402035 main.go:141] libmachine: About to run SSH command:
	hostname
	I0510 18:02:46.640153  402035 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-581506
	
	I0510 18:02:46.640174  402035 main.go:141] libmachine: (functional-581506) Calling .GetMachineName
	I0510 18:02:46.640441  402035 buildroot.go:166] provisioning hostname "functional-581506"
	I0510 18:02:46.640464  402035 main.go:141] libmachine: (functional-581506) Calling .GetMachineName
	I0510 18:02:46.640667  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHHostname
	I0510 18:02:46.643291  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:46.643617  402035 main.go:141] libmachine: (functional-581506) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:2c:dc", ip: ""} in network mk-functional-581506: {Iface:virbr1 ExpiryTime:2025-05-10 19:00:46 +0000 UTC Type:0 Mac:52:54:00:34:2c:dc Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-581506 Clientid:01:52:54:00:34:2c:dc}
	I0510 18:02:46.643642  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined IP address 192.168.39.52 and MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:46.643791  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHPort
	I0510 18:02:46.644010  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHKeyPath
	I0510 18:02:46.644246  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHKeyPath
	I0510 18:02:46.644473  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHUsername
	I0510 18:02:46.644671  402035 main.go:141] libmachine: Using SSH client type: native
	I0510 18:02:46.644975  402035 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I0510 18:02:46.644986  402035 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-581506 && echo "functional-581506" | sudo tee /etc/hostname
	I0510 18:02:46.783110  402035 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-581506
	
	I0510 18:02:46.783132  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHHostname
	I0510 18:02:46.786450  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:46.786777  402035 main.go:141] libmachine: (functional-581506) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:2c:dc", ip: ""} in network mk-functional-581506: {Iface:virbr1 ExpiryTime:2025-05-10 19:00:46 +0000 UTC Type:0 Mac:52:54:00:34:2c:dc Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-581506 Clientid:01:52:54:00:34:2c:dc}
	I0510 18:02:46.786821  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined IP address 192.168.39.52 and MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:46.787057  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHPort
	I0510 18:02:46.787283  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHKeyPath
	I0510 18:02:46.787424  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHKeyPath
	I0510 18:02:46.787531  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHUsername
	I0510 18:02:46.787679  402035 main.go:141] libmachine: Using SSH client type: native
	I0510 18:02:46.787970  402035 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I0510 18:02:46.787987  402035 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-581506' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-581506/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-581506' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0510 18:02:46.908762  402035 main.go:141] libmachine: SSH cmd err, output: <nil>: 
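The SSH command above ensures /etc/hosts maps the machine's hostname, either rewriting the 127.0.1.1 entry or appending one. A minimal standalone Go sketch of the same logic (the ensureHostname helper is hypothetical, not minikube's provisioner):

	// ensure_hostname.go: sketch of the /etc/hosts update shown in the log.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func ensureHostname(hostsPath, name string) error {
		data, err := os.ReadFile(hostsPath)
		if err != nil {
			return err
		}
		lines := strings.Split(string(data), "\n")
		for _, l := range lines {
			// Already mapped (any entry ending in the hostname)? Then nothing to do.
			if fields := strings.Fields(l); len(fields) >= 2 && fields[len(fields)-1] == name {
				return nil
			}
		}
		replaced := false
		for i, l := range lines {
			if strings.HasPrefix(l, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + name
				replaced = true
				break
			}
		}
		if !replaced {
			lines = append(lines, "127.0.1.1 "+name)
		}
		return os.WriteFile(hostsPath, []byte(strings.Join(lines, "\n")), 0644)
	}

	func main() {
		if err := ensureHostname("/etc/hosts", "functional-581506"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}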
	I0510 18:02:46.908797  402035 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20720-388787/.minikube CaCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20720-388787/.minikube}
	I0510 18:02:46.908841  402035 buildroot.go:174] setting up certificates
	I0510 18:02:46.908855  402035 provision.go:84] configureAuth start
	I0510 18:02:46.908864  402035 main.go:141] libmachine: (functional-581506) Calling .GetMachineName
	I0510 18:02:46.909218  402035 main.go:141] libmachine: (functional-581506) Calling .GetIP
	I0510 18:02:46.911981  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:46.912317  402035 main.go:141] libmachine: (functional-581506) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:2c:dc", ip: ""} in network mk-functional-581506: {Iface:virbr1 ExpiryTime:2025-05-10 19:00:46 +0000 UTC Type:0 Mac:52:54:00:34:2c:dc Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-581506 Clientid:01:52:54:00:34:2c:dc}
	I0510 18:02:46.912335  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined IP address 192.168.39.52 and MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:46.912579  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHHostname
	I0510 18:02:46.915330  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:46.915770  402035 main.go:141] libmachine: (functional-581506) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:2c:dc", ip: ""} in network mk-functional-581506: {Iface:virbr1 ExpiryTime:2025-05-10 19:00:46 +0000 UTC Type:0 Mac:52:54:00:34:2c:dc Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-581506 Clientid:01:52:54:00:34:2c:dc}
	I0510 18:02:46.915808  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined IP address 192.168.39.52 and MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:46.915943  402035 provision.go:143] copyHostCerts
	I0510 18:02:46.916005  402035 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem, removing ...
	I0510 18:02:46.916026  402035 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem
	I0510 18:02:46.916089  402035 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem (1078 bytes)
	I0510 18:02:46.916183  402035 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem, removing ...
	I0510 18:02:46.916187  402035 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem
	I0510 18:02:46.916210  402035 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem (1123 bytes)
	I0510 18:02:46.916258  402035 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem, removing ...
	I0510 18:02:46.916261  402035 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem
	I0510 18:02:46.916283  402035 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem (1675 bytes)
	I0510 18:02:46.916322  402035 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem org=jenkins.functional-581506 san=[127.0.0.1 192.168.39.52 functional-581506 localhost minikube]
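The provisioner above issues a server certificate whose SANs cover 127.0.0.1, 192.168.39.52, functional-581506, localhost and minikube, signed against the minikube CA. A rough Go sketch of issuing a certificate with those SANs; for brevity it is self-signed rather than CA-signed, and all names are illustrative:

	// gen_server_cert.go: sketch of a server cert with the SANs logged above.
	// Self-signed here; the real flow signs with the minikube CA key pair.
	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.functional-581506"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the cluster config
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"functional-581506", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.52")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}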
	I0510 18:02:47.231951  402035 provision.go:177] copyRemoteCerts
	I0510 18:02:47.232007  402035 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0510 18:02:47.232032  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHHostname
	I0510 18:02:47.235562  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:47.235996  402035 main.go:141] libmachine: (functional-581506) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:2c:dc", ip: ""} in network mk-functional-581506: {Iface:virbr1 ExpiryTime:2025-05-10 19:00:46 +0000 UTC Type:0 Mac:52:54:00:34:2c:dc Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-581506 Clientid:01:52:54:00:34:2c:dc}
	I0510 18:02:47.236028  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined IP address 192.168.39.52 and MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:47.236244  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHPort
	I0510 18:02:47.236501  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHKeyPath
	I0510 18:02:47.236684  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHUsername
	I0510 18:02:47.236859  402035 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/functional-581506/id_rsa Username:docker}
	I0510 18:02:47.328493  402035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0510 18:02:47.362929  402035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0510 18:02:47.402301  402035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0510 18:02:47.436276  402035 provision.go:87] duration metric: took 527.405123ms to configureAuth
	I0510 18:02:47.436303  402035 buildroot.go:189] setting minikube options for container-runtime
	I0510 18:02:47.436596  402035 config.go:182] Loaded profile config "functional-581506": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 18:02:47.436690  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHHostname
	I0510 18:02:47.440022  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:47.440415  402035 main.go:141] libmachine: (functional-581506) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:2c:dc", ip: ""} in network mk-functional-581506: {Iface:virbr1 ExpiryTime:2025-05-10 19:00:46 +0000 UTC Type:0 Mac:52:54:00:34:2c:dc Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-581506 Clientid:01:52:54:00:34:2c:dc}
	I0510 18:02:47.440441  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined IP address 192.168.39.52 and MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:47.440681  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHPort
	I0510 18:02:47.440965  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHKeyPath
	I0510 18:02:47.441340  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHKeyPath
	I0510 18:02:47.441565  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHUsername
	I0510 18:02:47.441774  402035 main.go:141] libmachine: Using SSH client type: native
	I0510 18:02:47.442138  402035 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I0510 18:02:47.442150  402035 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0510 18:02:53.140976  402035 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0510 18:02:53.140992  402035 machine.go:96] duration metric: took 6.627069114s to provisionDockerMachine
	I0510 18:02:53.141003  402035 start.go:293] postStartSetup for "functional-581506" (driver="kvm2")
	I0510 18:02:53.141012  402035 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0510 18:02:53.141027  402035 main.go:141] libmachine: (functional-581506) Calling .DriverName
	I0510 18:02:53.141384  402035 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0510 18:02:53.141411  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHHostname
	I0510 18:02:53.144494  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:53.144834  402035 main.go:141] libmachine: (functional-581506) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:2c:dc", ip: ""} in network mk-functional-581506: {Iface:virbr1 ExpiryTime:2025-05-10 19:00:46 +0000 UTC Type:0 Mac:52:54:00:34:2c:dc Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-581506 Clientid:01:52:54:00:34:2c:dc}
	I0510 18:02:53.144853  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined IP address 192.168.39.52 and MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:53.144999  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHPort
	I0510 18:02:53.145178  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHKeyPath
	I0510 18:02:53.145322  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHUsername
	I0510 18:02:53.145457  402035 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/functional-581506/id_rsa Username:docker}
	I0510 18:02:53.240441  402035 ssh_runner.go:195] Run: cat /etc/os-release
	I0510 18:02:53.245712  402035 info.go:137] Remote host: Buildroot 2024.11.2
	I0510 18:02:53.245743  402035 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-388787/.minikube/addons for local assets ...
	I0510 18:02:53.245813  402035 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-388787/.minikube/files for local assets ...
	I0510 18:02:53.245880  402035 filesync.go:149] local asset: /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem -> 3959802.pem in /etc/ssl/certs
	I0510 18:02:53.245953  402035 filesync.go:149] local asset: /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/test/nested/copy/395980/hosts -> hosts in /etc/test/nested/copy/395980
	I0510 18:02:53.245988  402035 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/395980
	I0510 18:02:53.258624  402035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem --> /etc/ssl/certs/3959802.pem (1708 bytes)
	I0510 18:02:53.295954  402035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/test/nested/copy/395980/hosts --> /etc/test/nested/copy/395980/hosts (40 bytes)
	I0510 18:02:53.327666  402035 start.go:296] duration metric: took 186.648319ms for postStartSetup
	I0510 18:02:53.327715  402035 fix.go:56] duration metric: took 6.838760767s for fixHost
	I0510 18:02:53.327740  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHHostname
	I0510 18:02:53.330484  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:53.330859  402035 main.go:141] libmachine: (functional-581506) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:2c:dc", ip: ""} in network mk-functional-581506: {Iface:virbr1 ExpiryTime:2025-05-10 19:00:46 +0000 UTC Type:0 Mac:52:54:00:34:2c:dc Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-581506 Clientid:01:52:54:00:34:2c:dc}
	I0510 18:02:53.330890  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined IP address 192.168.39.52 and MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:53.331009  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHPort
	I0510 18:02:53.331230  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHKeyPath
	I0510 18:02:53.331412  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHKeyPath
	I0510 18:02:53.331544  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHUsername
	I0510 18:02:53.331662  402035 main.go:141] libmachine: Using SSH client type: native
	I0510 18:02:53.331877  402035 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I0510 18:02:53.331882  402035 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0510 18:02:53.453061  402035 main.go:141] libmachine: SSH cmd err, output: <nil>: 1746900173.447130487
	
	I0510 18:02:53.453092  402035 fix.go:216] guest clock: 1746900173.447130487
	I0510 18:02:53.453099  402035 fix.go:229] Guest: 2025-05-10 18:02:53.447130487 +0000 UTC Remote: 2025-05-10 18:02:53.327719446 +0000 UTC m=+6.971359045 (delta=119.411041ms)
	I0510 18:02:53.453119  402035 fix.go:200] guest clock delta is within tolerance: 119.411041ms
	I0510 18:02:53.453123  402035 start.go:83] releasing machines lock for "functional-581506", held for 6.964180893s
	I0510 18:02:53.453145  402035 main.go:141] libmachine: (functional-581506) Calling .DriverName
	I0510 18:02:53.453448  402035 main.go:141] libmachine: (functional-581506) Calling .GetIP
	I0510 18:02:53.456220  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:53.456476  402035 main.go:141] libmachine: (functional-581506) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:2c:dc", ip: ""} in network mk-functional-581506: {Iface:virbr1 ExpiryTime:2025-05-10 19:00:46 +0000 UTC Type:0 Mac:52:54:00:34:2c:dc Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-581506 Clientid:01:52:54:00:34:2c:dc}
	I0510 18:02:53.456494  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined IP address 192.168.39.52 and MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:53.456627  402035 main.go:141] libmachine: (functional-581506) Calling .DriverName
	I0510 18:02:53.457205  402035 main.go:141] libmachine: (functional-581506) Calling .DriverName
	I0510 18:02:53.457369  402035 main.go:141] libmachine: (functional-581506) Calling .DriverName
	I0510 18:02:53.457461  402035 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0510 18:02:53.457506  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHHostname
	I0510 18:02:53.457607  402035 ssh_runner.go:195] Run: cat /version.json
	I0510 18:02:53.457625  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHHostname
	I0510 18:02:53.460159  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:53.460383  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:53.460534  402035 main.go:141] libmachine: (functional-581506) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:2c:dc", ip: ""} in network mk-functional-581506: {Iface:virbr1 ExpiryTime:2025-05-10 19:00:46 +0000 UTC Type:0 Mac:52:54:00:34:2c:dc Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-581506 Clientid:01:52:54:00:34:2c:dc}
	I0510 18:02:53.460568  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined IP address 192.168.39.52 and MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:53.460745  402035 main.go:141] libmachine: (functional-581506) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:2c:dc", ip: ""} in network mk-functional-581506: {Iface:virbr1 ExpiryTime:2025-05-10 19:00:46 +0000 UTC Type:0 Mac:52:54:00:34:2c:dc Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-581506 Clientid:01:52:54:00:34:2c:dc}
	I0510 18:02:53.460761  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined IP address 192.168.39.52 and MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:53.460773  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHPort
	I0510 18:02:53.460958  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHKeyPath
	I0510 18:02:53.460967  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHPort
	I0510 18:02:53.461130  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHUsername
	I0510 18:02:53.461146  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHKeyPath
	I0510 18:02:53.461326  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHUsername
	I0510 18:02:53.461314  402035 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/functional-581506/id_rsa Username:docker}
	I0510 18:02:53.461447  402035 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/functional-581506/id_rsa Username:docker}
	I0510 18:02:53.559403  402035 ssh_runner.go:195] Run: systemctl --version
	I0510 18:02:53.582132  402035 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0510 18:02:53.770630  402035 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0510 18:02:53.783161  402035 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0510 18:02:53.783285  402035 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0510 18:02:53.798993  402035 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
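The find/mv step above sets aside any bridge or podman CNI configuration by renaming it with a .mk_disabled suffix; here it found nothing to disable. A minimal standalone Go sketch of the same rename, assuming the default /etc/cni/net.d directory:

	// disable_bridge_cni.go: sketch of the rename performed by the find/mv command above.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		dir := "/etc/cni/net.d"
		entries, err := os.ReadDir(dir)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join(dir, name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					fmt.Fprintln(os.Stderr, err)
					continue
				}
				fmt.Printf("%s, ", src) // same "%p, " style output as the find -printf above
			}
		}
	}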
	I0510 18:02:53.799013  402035 start.go:495] detecting cgroup driver to use...
	I0510 18:02:53.799097  402035 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0510 18:02:53.823538  402035 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0510 18:02:53.848708  402035 docker.go:225] disabling cri-docker service (if available) ...
	I0510 18:02:53.848771  402035 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0510 18:02:53.880475  402035 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0510 18:02:53.909205  402035 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0510 18:02:54.228229  402035 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0510 18:02:54.462507  402035 docker.go:241] disabling docker service ...
	I0510 18:02:54.462575  402035 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0510 18:02:54.497169  402035 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0510 18:02:54.516357  402035 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0510 18:02:54.753088  402035 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0510 18:02:54.940449  402035 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0510 18:02:54.956825  402035 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0510 18:02:54.980731  402035 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0510 18:02:54.980784  402035 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 18:02:54.993371  402035 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0510 18:02:54.993440  402035 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 18:02:55.006052  402035 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 18:02:55.018197  402035 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 18:02:55.030433  402035 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0510 18:02:55.045006  402035 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 18:02:55.057444  402035 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 18:02:55.071727  402035 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 18:02:55.084200  402035 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0510 18:02:55.096230  402035 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0510 18:02:55.107855  402035 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 18:02:55.290042  402035 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0510 18:04:25.856147  402035 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.566058413s)
	I0510 18:04:25.856185  402035 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0510 18:04:25.856270  402035 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0510 18:04:25.863129  402035 start.go:563] Will wait 60s for crictl version
	I0510 18:04:25.863197  402035 ssh_runner.go:195] Run: which crictl
	I0510 18:04:25.868051  402035 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0510 18:04:25.911506  402035 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0510 18:04:25.911578  402035 ssh_runner.go:195] Run: crio --version
	I0510 18:04:25.945197  402035 ssh_runner.go:195] Run: crio --version
	I0510 18:04:25.980379  402035 out.go:177] * Preparing Kubernetes v1.33.0 on CRI-O 1.29.1 ...
	I0510 18:04:25.982219  402035 main.go:141] libmachine: (functional-581506) Calling .GetIP
	I0510 18:04:25.985326  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:04:25.985730  402035 main.go:141] libmachine: (functional-581506) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:2c:dc", ip: ""} in network mk-functional-581506: {Iface:virbr1 ExpiryTime:2025-05-10 19:00:46 +0000 UTC Type:0 Mac:52:54:00:34:2c:dc Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-581506 Clientid:01:52:54:00:34:2c:dc}
	I0510 18:04:25.985751  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined IP address 192.168.39.52 and MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:04:25.985941  402035 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0510 18:04:25.993435  402035 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0510 18:04:25.995308  402035 kubeadm.go:875] updating cluster {Name:functional-581506 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:functional-581506 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.52 Port:8441 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0510 18:04:25.995446  402035 preload.go:131] Checking if preload exists for k8s version v1.33.0 and runtime crio
	I0510 18:04:25.995518  402035 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 18:04:26.045932  402035 crio.go:514] all images are preloaded for cri-o runtime.
	I0510 18:04:26.045946  402035 crio.go:433] Images already preloaded, skipping extraction
	I0510 18:04:26.046014  402035 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 18:04:26.085235  402035 crio.go:514] all images are preloaded for cri-o runtime.
	I0510 18:04:26.085254  402035 cache_images.go:84] Images are preloaded, skipping loading
	I0510 18:04:26.085265  402035 kubeadm.go:926] updating node { 192.168.39.52 8441 v1.33.0 crio true true} ...
	I0510 18:04:26.085431  402035 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.33.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-581506 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.52
	
	[Install]
	 config:
	{KubernetesVersion:v1.33.0 ClusterName:functional-581506 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0510 18:04:26.085506  402035 ssh_runner.go:195] Run: crio config
	I0510 18:04:26.138253  402035 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0510 18:04:26.138281  402035 cni.go:84] Creating CNI manager for ""
	I0510 18:04:26.138297  402035 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0510 18:04:26.138305  402035 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0510 18:04:26.138331  402035 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.52 APIServerPort:8441 KubernetesVersion:v1.33.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-581506 NodeName:functional-581506 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.52"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.52 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0510 18:04:26.138459  402035 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.52
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-581506"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.52"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.52"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.33.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0510 18:04:26.138527  402035 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.33.0
	I0510 18:04:26.152410  402035 binaries.go:44] Found k8s binaries, skipping transfer
	I0510 18:04:26.152484  402035 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0510 18:04:26.164608  402035 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0510 18:04:26.187091  402035 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0510 18:04:26.208040  402035 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2144 bytes)
	I0510 18:04:26.231151  402035 ssh_runner.go:195] Run: grep 192.168.39.52	control-plane.minikube.internal$ /etc/hosts
	I0510 18:04:26.235726  402035 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 18:04:26.416698  402035 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0510 18:04:26.435417  402035 certs.go:68] Setting up /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506 for IP: 192.168.39.52
	I0510 18:04:26.435435  402035 certs.go:194] generating shared ca certs ...
	I0510 18:04:26.435455  402035 certs.go:226] acquiring lock for ca certs: {Name:mk8db74782205da4ac57ef815dd495cda255251a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 18:04:26.435657  402035 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20720-388787/.minikube/ca.key
	I0510 18:04:26.435715  402035 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.key
	I0510 18:04:26.435724  402035 certs.go:256] generating profile certs ...
	I0510 18:04:26.435807  402035 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/client.key
	I0510 18:04:26.435852  402035 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/apiserver.key.e77f3034
	I0510 18:04:26.435879  402035 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/proxy-client.key
	I0510 18:04:26.435998  402035 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/395980.pem (1338 bytes)
	W0510 18:04:26.436022  402035 certs.go:480] ignoring /home/jenkins/minikube-integration/20720-388787/.minikube/certs/395980_empty.pem, impossibly tiny 0 bytes
	I0510 18:04:26.436028  402035 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem (1679 bytes)
	I0510 18:04:26.436049  402035 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem (1078 bytes)
	I0510 18:04:26.436067  402035 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem (1123 bytes)
	I0510 18:04:26.436088  402035 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem (1675 bytes)
	I0510 18:04:26.436136  402035 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem (1708 bytes)
	I0510 18:04:26.436850  402035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0510 18:04:26.469054  402035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0510 18:04:26.499255  402035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0510 18:04:26.529739  402035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0510 18:04:26.561946  402035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0510 18:04:26.595162  402035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0510 18:04:26.627840  402035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0510 18:04:26.659449  402035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0510 18:04:26.693269  402035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/certs/395980.pem --> /usr/share/ca-certificates/395980.pem (1338 bytes)
	I0510 18:04:26.724816  402035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem --> /usr/share/ca-certificates/3959802.pem (1708 bytes)
	I0510 18:04:26.754834  402035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0510 18:04:26.787011  402035 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0510 18:04:26.809227  402035 ssh_runner.go:195] Run: openssl version
	I0510 18:04:26.817671  402035 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3959802.pem && ln -fs /usr/share/ca-certificates/3959802.pem /etc/ssl/certs/3959802.pem"
	I0510 18:04:26.831583  402035 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3959802.pem
	I0510 18:04:26.837165  402035 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 10 18:00 /usr/share/ca-certificates/3959802.pem
	I0510 18:04:26.837228  402035 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3959802.pem
	I0510 18:04:26.845401  402035 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3959802.pem /etc/ssl/certs/3ec20f2e.0"
	I0510 18:04:26.857985  402035 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0510 18:04:26.871727  402035 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0510 18:04:26.877551  402035 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 10 17:52 /usr/share/ca-certificates/minikubeCA.pem
	I0510 18:04:26.877655  402035 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0510 18:04:26.885597  402035 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0510 18:04:26.897966  402035 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/395980.pem && ln -fs /usr/share/ca-certificates/395980.pem /etc/ssl/certs/395980.pem"
	I0510 18:04:26.911449  402035 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/395980.pem
	I0510 18:04:26.917136  402035 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 10 18:00 /usr/share/ca-certificates/395980.pem
	I0510 18:04:26.917209  402035 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/395980.pem
	I0510 18:04:26.924808  402035 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/395980.pem /etc/ssl/certs/51391683.0"
	I0510 18:04:26.957285  402035 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0510 18:04:26.969150  402035 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0510 18:04:26.987736  402035 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0510 18:04:27.006182  402035 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0510 18:04:27.022469  402035 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0510 18:04:27.031936  402035 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0510 18:04:27.044701  402035 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0510 18:04:27.061968  402035 kubeadm.go:392] StartCluster: {Name:functional-581506 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:functional-581506 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.52 Port:8441 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 18:04:27.062052  402035 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0510 18:04:27.062122  402035 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0510 18:04:27.165135  402035 cri.go:89] found id: "67bef24b725ebf7a2b7f343d7456516d6b5de38118f9cf48e7d70d9146ce2087"
	I0510 18:04:27.165148  402035 cri.go:89] found id: "2b4eccacbeea6a58cc9c575f2c2bf5f8297029f9c9d2a9264bcf3e69644b4c28"
	I0510 18:04:27.165151  402035 cri.go:89] found id: "9ddf6914642a098d580c48db641460c4197df74a06bf7008e362f610f185934d"
	I0510 18:04:27.165153  402035 cri.go:89] found id: "08d812eb972925640e90642a5458269dea94298436a73e78a578d0bfe369daaf"
	I0510 18:04:27.165155  402035 cri.go:89] found id: "74fd0b7de642965eb7e03cf324017cb2195034685758e46efbd5e6997aba9ae5"
	I0510 18:04:27.165157  402035 cri.go:89] found id: "5879bea6c3a25517766471c3eec758ce0c6d853db7055e1f3505263a674ed969"
	I0510 18:04:27.165158  402035 cri.go:89] found id: "bc42d63e6220a437de1d056d765ed97df2e6978798401b10283f61c7b1bc895b"
	I0510 18:04:27.165160  402035 cri.go:89] found id: ""
	I0510 18:04:27.165206  402035 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-581506 -n functional-581506
helpers_test.go:261: (dbg) Run:  kubectl --context functional-581506 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/ExtraConfig FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/ExtraConfig (352.19s)
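
The start that timed out in this test passed --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision (see the Audit table further down). A minimal sketch for checking by hand whether that flag actually reached the API server, assuming the same profile/context name functional-581506 used in this run:

	# re-run the failing start with the same extra-config (sketch only)
	out/minikube-linux-amd64 start -p functional-581506 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all

	# confirm the admission-plugins flag is present on the kube-apiserver static pod
	kubectl --context functional-581506 -n kube-system get pod -l component=kube-apiserver -o yaml \
	  | grep enable-admission-plugins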

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (2.18s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-581506 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler is not Ready: {Phase:Running Conditions:[{Type:PodReadyToStartContainers Status:False} {Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:False} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.39.52 PodIP:192.168.39.52 StartTime:2025-05-10 18:04:30 +0000 UTC ContainerStatuses:[{Name:kube-scheduler State:{Waiting:<nil> Running:<nil> Terminated:0xc0005361c0} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:1 Image:registry.k8s.io/kube-scheduler:v1.33.0 ImageID:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4 ContainerID:cri-o://bc42d63e6220a437de1d056d765ed97df2e6978798401b10283f61c7b1bc895b}]}
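
The readiness dump above is the raw pod status the test reads from its get-pods call. A minimal sketch of an equivalent manual check that prints only each control-plane pod's Ready condition, assuming the same context name functional-581506:

	kubectl --context functional-581506 -n kube-system get po -l tier=control-plane \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'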
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-581506 -n functional-581506
helpers_test.go:244: <<< TestFunctional/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-581506 logs -n 25: (1.499958149s)
helpers_test.go:252: TestFunctional/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| unpause | nospam-878760 --log_dir                                                  | nospam-878760     | jenkins | v1.35.0 | 10 May 25 18:00 UTC | 10 May 25 18:00 UTC |
	|         | /tmp/nospam-878760 unpause                                               |                   |         |         |                     |                     |
	| unpause | nospam-878760 --log_dir                                                  | nospam-878760     | jenkins | v1.35.0 | 10 May 25 18:00 UTC | 10 May 25 18:00 UTC |
	|         | /tmp/nospam-878760 unpause                                               |                   |         |         |                     |                     |
	| unpause | nospam-878760 --log_dir                                                  | nospam-878760     | jenkins | v1.35.0 | 10 May 25 18:00 UTC | 10 May 25 18:00 UTC |
	|         | /tmp/nospam-878760 unpause                                               |                   |         |         |                     |                     |
	| stop    | nospam-878760 --log_dir                                                  | nospam-878760     | jenkins | v1.35.0 | 10 May 25 18:00 UTC | 10 May 25 18:00 UTC |
	|         | /tmp/nospam-878760 stop                                                  |                   |         |         |                     |                     |
	| stop    | nospam-878760 --log_dir                                                  | nospam-878760     | jenkins | v1.35.0 | 10 May 25 18:00 UTC | 10 May 25 18:00 UTC |
	|         | /tmp/nospam-878760 stop                                                  |                   |         |         |                     |                     |
	| stop    | nospam-878760 --log_dir                                                  | nospam-878760     | jenkins | v1.35.0 | 10 May 25 18:00 UTC | 10 May 25 18:00 UTC |
	|         | /tmp/nospam-878760 stop                                                  |                   |         |         |                     |                     |
	| delete  | -p nospam-878760                                                         | nospam-878760     | jenkins | v1.35.0 | 10 May 25 18:00 UTC | 10 May 25 18:00 UTC |
	| start   | -p functional-581506                                                     | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:00 UTC | 10 May 25 18:02 UTC |
	|         | --memory=4000                                                            |                   |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                   |         |         |                     |                     |
	|         | --wait=all --driver=kvm2                                                 |                   |         |         |                     |                     |
	|         | --container-runtime=crio                                                 |                   |         |         |                     |                     |
	| start   | -p functional-581506                                                     | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:02 UTC | 10 May 25 18:02 UTC |
	|         | --alsologtostderr -v=8                                                   |                   |         |         |                     |                     |
	| cache   | functional-581506 cache add                                              | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:02 UTC | 10 May 25 18:02 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | functional-581506 cache add                                              | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:02 UTC | 10 May 25 18:02 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | functional-581506 cache add                                              | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:02 UTC | 10 May 25 18:02 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-581506 cache add                                              | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:02 UTC | 10 May 25 18:02 UTC |
	|         | minikube-local-cache-test:functional-581506                              |                   |         |         |                     |                     |
	| cache   | functional-581506 cache delete                                           | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:02 UTC | 10 May 25 18:02 UTC |
	|         | minikube-local-cache-test:functional-581506                              |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.35.0 | 10 May 25 18:02 UTC | 10 May 25 18:02 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | list                                                                     | minikube          | jenkins | v1.35.0 | 10 May 25 18:02 UTC | 10 May 25 18:02 UTC |
	| ssh     | functional-581506 ssh sudo                                               | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:02 UTC | 10 May 25 18:02 UTC |
	|         | crictl images                                                            |                   |         |         |                     |                     |
	| ssh     | functional-581506                                                        | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:02 UTC | 10 May 25 18:02 UTC |
	|         | ssh sudo crictl rmi                                                      |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| ssh     | functional-581506 ssh                                                    | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:02 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-581506 cache reload                                           | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:02 UTC | 10 May 25 18:02 UTC |
	| ssh     | functional-581506 ssh                                                    | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:02 UTC | 10 May 25 18:02 UTC |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.35.0 | 10 May 25 18:02 UTC | 10 May 25 18:02 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.35.0 | 10 May 25 18:02 UTC | 10 May 25 18:02 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| kubectl | functional-581506 kubectl --                                             | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:02 UTC | 10 May 25 18:02 UTC |
	|         | --context functional-581506                                              |                   |         |         |                     |                     |
	|         | get pods                                                                 |                   |         |         |                     |                     |
	| start   | -p functional-581506                                                     | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:02 UTC |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |         |         |                     |                     |
	|         | --wait=all                                                               |                   |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/05/10 18:02:46
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0510 18:02:46.396513  402035 out.go:345] Setting OutFile to fd 1 ...
	I0510 18:02:46.396636  402035 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 18:02:46.396640  402035 out.go:358] Setting ErrFile to fd 2...
	I0510 18:02:46.396643  402035 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 18:02:46.396841  402035 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-388787/.minikube/bin
	I0510 18:02:46.397369  402035 out.go:352] Setting JSON to false
	I0510 18:02:46.398311  402035 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":27914,"bootTime":1746872252,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0510 18:02:46.398421  402035 start.go:140] virtualization: kvm guest
	I0510 18:02:46.400743  402035 out.go:177] * [functional-581506] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0510 18:02:46.402186  402035 out.go:177]   - MINIKUBE_LOCATION=20720
	I0510 18:02:46.402177  402035 notify.go:220] Checking for updates...
	I0510 18:02:46.403510  402035 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0510 18:02:46.405219  402035 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20720-388787/kubeconfig
	I0510 18:02:46.406775  402035 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-388787/.minikube
	I0510 18:02:46.408169  402035 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0510 18:02:46.409488  402035 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0510 18:02:46.411314  402035 config.go:182] Loaded profile config "functional-581506": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 18:02:46.411402  402035 driver.go:404] Setting default libvirt URI to qemu:///system
	I0510 18:02:46.411895  402035 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 18:02:46.411958  402035 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:02:46.428015  402035 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42895
	I0510 18:02:46.428521  402035 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:02:46.429033  402035 main.go:141] libmachine: Using API Version  1
	I0510 18:02:46.429050  402035 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:02:46.429423  402035 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:02:46.429597  402035 main.go:141] libmachine: (functional-581506) Calling .DriverName
	I0510 18:02:46.464202  402035 out.go:177] * Using the kvm2 driver based on existing profile
	I0510 18:02:46.465611  402035 start.go:304] selected driver: kvm2
	I0510 18:02:46.465621  402035 start.go:908] validating driver "kvm2" against &{Name:functional-581506 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:functional-581506 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.52 Port:8441 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 18:02:46.465726  402035 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0510 18:02:46.466055  402035 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0510 18:02:46.466154  402035 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20720-388787/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0510 18:02:46.483313  402035 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0510 18:02:46.484300  402035 start_flags.go:975] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0510 18:02:46.484336  402035 cni.go:84] Creating CNI manager for ""
	I0510 18:02:46.484393  402035 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0510 18:02:46.484445  402035 start.go:347] cluster config:
	{Name:functional-581506 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:functional-581506 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.52 Port:8441 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 18:02:46.484546  402035 iso.go:125] acquiring lock: {Name:mk19640015999219180c6685480547adf0c02201 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0510 18:02:46.486929  402035 out.go:177] * Starting "functional-581506" primary control-plane node in "functional-581506" cluster
	I0510 18:02:46.488381  402035 preload.go:131] Checking if preload exists for k8s version v1.33.0 and runtime crio
	I0510 18:02:46.488424  402035 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-cri-o-overlay-amd64.tar.lz4
	I0510 18:02:46.488433  402035 cache.go:56] Caching tarball of preloaded images
	I0510 18:02:46.488558  402035 preload.go:172] Found /home/jenkins/minikube-integration/20720-388787/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0510 18:02:46.488566  402035 cache.go:59] Finished verifying existence of preloaded tar for v1.33.0 on crio
	I0510 18:02:46.488662  402035 profile.go:143] Saving config to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/config.json ...
	I0510 18:02:46.488872  402035 start.go:360] acquireMachinesLock for functional-581506: {Name:mk11499d7756d503a7a24339ad1a7f9ab9dc0fab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0510 18:02:46.488936  402035 start.go:364] duration metric: took 49.209µs to acquireMachinesLock for "functional-581506"
	I0510 18:02:46.488949  402035 start.go:96] Skipping create...Using existing machine configuration
	I0510 18:02:46.488953  402035 fix.go:54] fixHost starting: 
	I0510 18:02:46.489257  402035 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 18:02:46.489298  402035 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:02:46.505903  402035 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46635
	I0510 18:02:46.506581  402035 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:02:46.507080  402035 main.go:141] libmachine: Using API Version  1
	I0510 18:02:46.507090  402035 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:02:46.507470  402035 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:02:46.507695  402035 main.go:141] libmachine: (functional-581506) Calling .DriverName
	I0510 18:02:46.507905  402035 main.go:141] libmachine: (functional-581506) Calling .GetState
	I0510 18:02:46.509827  402035 fix.go:112] recreateIfNeeded on functional-581506: state=Running err=<nil>
	W0510 18:02:46.509841  402035 fix.go:138] unexpected machine state, will restart: <nil>
	I0510 18:02:46.512283  402035 out.go:177] * Updating the running kvm2 "functional-581506" VM ...
	I0510 18:02:46.513904  402035 machine.go:93] provisionDockerMachine start ...
	I0510 18:02:46.513940  402035 main.go:141] libmachine: (functional-581506) Calling .DriverName
	I0510 18:02:46.514326  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHHostname
	I0510 18:02:46.517256  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:46.517672  402035 main.go:141] libmachine: (functional-581506) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:2c:dc", ip: ""} in network mk-functional-581506: {Iface:virbr1 ExpiryTime:2025-05-10 19:00:46 +0000 UTC Type:0 Mac:52:54:00:34:2c:dc Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-581506 Clientid:01:52:54:00:34:2c:dc}
	I0510 18:02:46.517709  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined IP address 192.168.39.52 and MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:46.517917  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHPort
	I0510 18:02:46.518128  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHKeyPath
	I0510 18:02:46.518280  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHKeyPath
	I0510 18:02:46.518424  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHUsername
	I0510 18:02:46.518549  402035 main.go:141] libmachine: Using SSH client type: native
	I0510 18:02:46.518772  402035 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I0510 18:02:46.518777  402035 main.go:141] libmachine: About to run SSH command:
	hostname
	I0510 18:02:46.640153  402035 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-581506
	
	I0510 18:02:46.640174  402035 main.go:141] libmachine: (functional-581506) Calling .GetMachineName
	I0510 18:02:46.640441  402035 buildroot.go:166] provisioning hostname "functional-581506"
	I0510 18:02:46.640464  402035 main.go:141] libmachine: (functional-581506) Calling .GetMachineName
	I0510 18:02:46.640667  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHHostname
	I0510 18:02:46.643291  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:46.643617  402035 main.go:141] libmachine: (functional-581506) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:2c:dc", ip: ""} in network mk-functional-581506: {Iface:virbr1 ExpiryTime:2025-05-10 19:00:46 +0000 UTC Type:0 Mac:52:54:00:34:2c:dc Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-581506 Clientid:01:52:54:00:34:2c:dc}
	I0510 18:02:46.643642  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined IP address 192.168.39.52 and MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:46.643791  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHPort
	I0510 18:02:46.644010  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHKeyPath
	I0510 18:02:46.644246  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHKeyPath
	I0510 18:02:46.644473  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHUsername
	I0510 18:02:46.644671  402035 main.go:141] libmachine: Using SSH client type: native
	I0510 18:02:46.644975  402035 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I0510 18:02:46.644986  402035 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-581506 && echo "functional-581506" | sudo tee /etc/hostname
	I0510 18:02:46.783110  402035 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-581506
	
	I0510 18:02:46.783132  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHHostname
	I0510 18:02:46.786450  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:46.786777  402035 main.go:141] libmachine: (functional-581506) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:2c:dc", ip: ""} in network mk-functional-581506: {Iface:virbr1 ExpiryTime:2025-05-10 19:00:46 +0000 UTC Type:0 Mac:52:54:00:34:2c:dc Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-581506 Clientid:01:52:54:00:34:2c:dc}
	I0510 18:02:46.786821  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined IP address 192.168.39.52 and MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:46.787057  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHPort
	I0510 18:02:46.787283  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHKeyPath
	I0510 18:02:46.787424  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHKeyPath
	I0510 18:02:46.787531  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHUsername
	I0510 18:02:46.787679  402035 main.go:141] libmachine: Using SSH client type: native
	I0510 18:02:46.787970  402035 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I0510 18:02:46.787987  402035 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-581506' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-581506/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-581506' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0510 18:02:46.908762  402035 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0510 18:02:46.908797  402035 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20720-388787/.minikube CaCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20720-388787/.minikube}
	I0510 18:02:46.908841  402035 buildroot.go:174] setting up certificates
	I0510 18:02:46.908855  402035 provision.go:84] configureAuth start
	I0510 18:02:46.908864  402035 main.go:141] libmachine: (functional-581506) Calling .GetMachineName
	I0510 18:02:46.909218  402035 main.go:141] libmachine: (functional-581506) Calling .GetIP
	I0510 18:02:46.911981  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:46.912317  402035 main.go:141] libmachine: (functional-581506) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:2c:dc", ip: ""} in network mk-functional-581506: {Iface:virbr1 ExpiryTime:2025-05-10 19:00:46 +0000 UTC Type:0 Mac:52:54:00:34:2c:dc Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-581506 Clientid:01:52:54:00:34:2c:dc}
	I0510 18:02:46.912335  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined IP address 192.168.39.52 and MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:46.912579  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHHostname
	I0510 18:02:46.915330  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:46.915770  402035 main.go:141] libmachine: (functional-581506) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:2c:dc", ip: ""} in network mk-functional-581506: {Iface:virbr1 ExpiryTime:2025-05-10 19:00:46 +0000 UTC Type:0 Mac:52:54:00:34:2c:dc Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-581506 Clientid:01:52:54:00:34:2c:dc}
	I0510 18:02:46.915808  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined IP address 192.168.39.52 and MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:46.915943  402035 provision.go:143] copyHostCerts
	I0510 18:02:46.916005  402035 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem, removing ...
	I0510 18:02:46.916026  402035 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem
	I0510 18:02:46.916089  402035 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem (1078 bytes)
	I0510 18:02:46.916183  402035 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem, removing ...
	I0510 18:02:46.916187  402035 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem
	I0510 18:02:46.916210  402035 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem (1123 bytes)
	I0510 18:02:46.916258  402035 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem, removing ...
	I0510 18:02:46.916261  402035 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem
	I0510 18:02:46.916283  402035 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem (1675 bytes)
	I0510 18:02:46.916322  402035 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem org=jenkins.functional-581506 san=[127.0.0.1 192.168.39.52 functional-581506 localhost minikube]
	I0510 18:02:47.231951  402035 provision.go:177] copyRemoteCerts
	I0510 18:02:47.232007  402035 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0510 18:02:47.232032  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHHostname
	I0510 18:02:47.235562  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:47.235996  402035 main.go:141] libmachine: (functional-581506) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:2c:dc", ip: ""} in network mk-functional-581506: {Iface:virbr1 ExpiryTime:2025-05-10 19:00:46 +0000 UTC Type:0 Mac:52:54:00:34:2c:dc Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-581506 Clientid:01:52:54:00:34:2c:dc}
	I0510 18:02:47.236028  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined IP address 192.168.39.52 and MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:47.236244  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHPort
	I0510 18:02:47.236501  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHKeyPath
	I0510 18:02:47.236684  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHUsername
	I0510 18:02:47.236859  402035 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/functional-581506/id_rsa Username:docker}
	I0510 18:02:47.328493  402035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0510 18:02:47.362929  402035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0510 18:02:47.402301  402035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0510 18:02:47.436276  402035 provision.go:87] duration metric: took 527.405123ms to configureAuth
	I0510 18:02:47.436303  402035 buildroot.go:189] setting minikube options for container-runtime
	I0510 18:02:47.436596  402035 config.go:182] Loaded profile config "functional-581506": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 18:02:47.436690  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHHostname
	I0510 18:02:47.440022  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:47.440415  402035 main.go:141] libmachine: (functional-581506) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:2c:dc", ip: ""} in network mk-functional-581506: {Iface:virbr1 ExpiryTime:2025-05-10 19:00:46 +0000 UTC Type:0 Mac:52:54:00:34:2c:dc Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-581506 Clientid:01:52:54:00:34:2c:dc}
	I0510 18:02:47.440441  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined IP address 192.168.39.52 and MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:47.440681  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHPort
	I0510 18:02:47.440965  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHKeyPath
	I0510 18:02:47.441340  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHKeyPath
	I0510 18:02:47.441565  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHUsername
	I0510 18:02:47.441774  402035 main.go:141] libmachine: Using SSH client type: native
	I0510 18:02:47.442138  402035 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I0510 18:02:47.442150  402035 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0510 18:02:53.140976  402035 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0510 18:02:53.140992  402035 machine.go:96] duration metric: took 6.627069114s to provisionDockerMachine
	I0510 18:02:53.141003  402035 start.go:293] postStartSetup for "functional-581506" (driver="kvm2")
	I0510 18:02:53.141012  402035 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0510 18:02:53.141027  402035 main.go:141] libmachine: (functional-581506) Calling .DriverName
	I0510 18:02:53.141384  402035 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0510 18:02:53.141411  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHHostname
	I0510 18:02:53.144494  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:53.144834  402035 main.go:141] libmachine: (functional-581506) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:2c:dc", ip: ""} in network mk-functional-581506: {Iface:virbr1 ExpiryTime:2025-05-10 19:00:46 +0000 UTC Type:0 Mac:52:54:00:34:2c:dc Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-581506 Clientid:01:52:54:00:34:2c:dc}
	I0510 18:02:53.144853  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined IP address 192.168.39.52 and MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:53.144999  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHPort
	I0510 18:02:53.145178  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHKeyPath
	I0510 18:02:53.145322  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHUsername
	I0510 18:02:53.145457  402035 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/functional-581506/id_rsa Username:docker}
	I0510 18:02:53.240441  402035 ssh_runner.go:195] Run: cat /etc/os-release
	I0510 18:02:53.245712  402035 info.go:137] Remote host: Buildroot 2024.11.2
	I0510 18:02:53.245743  402035 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-388787/.minikube/addons for local assets ...
	I0510 18:02:53.245813  402035 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-388787/.minikube/files for local assets ...
	I0510 18:02:53.245880  402035 filesync.go:149] local asset: /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem -> 3959802.pem in /etc/ssl/certs
	I0510 18:02:53.245953  402035 filesync.go:149] local asset: /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/test/nested/copy/395980/hosts -> hosts in /etc/test/nested/copy/395980
	I0510 18:02:53.245988  402035 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/395980
	I0510 18:02:53.258624  402035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem --> /etc/ssl/certs/3959802.pem (1708 bytes)
	I0510 18:02:53.295954  402035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/test/nested/copy/395980/hosts --> /etc/test/nested/copy/395980/hosts (40 bytes)
	I0510 18:02:53.327666  402035 start.go:296] duration metric: took 186.648319ms for postStartSetup
	I0510 18:02:53.327715  402035 fix.go:56] duration metric: took 6.838760767s for fixHost
	I0510 18:02:53.327740  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHHostname
	I0510 18:02:53.330484  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:53.330859  402035 main.go:141] libmachine: (functional-581506) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:2c:dc", ip: ""} in network mk-functional-581506: {Iface:virbr1 ExpiryTime:2025-05-10 19:00:46 +0000 UTC Type:0 Mac:52:54:00:34:2c:dc Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-581506 Clientid:01:52:54:00:34:2c:dc}
	I0510 18:02:53.330890  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined IP address 192.168.39.52 and MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:53.331009  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHPort
	I0510 18:02:53.331230  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHKeyPath
	I0510 18:02:53.331412  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHKeyPath
	I0510 18:02:53.331544  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHUsername
	I0510 18:02:53.331662  402035 main.go:141] libmachine: Using SSH client type: native
	I0510 18:02:53.331877  402035 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I0510 18:02:53.331882  402035 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0510 18:02:53.453061  402035 main.go:141] libmachine: SSH cmd err, output: <nil>: 1746900173.447130487
	
	I0510 18:02:53.453092  402035 fix.go:216] guest clock: 1746900173.447130487
	I0510 18:02:53.453099  402035 fix.go:229] Guest: 2025-05-10 18:02:53.447130487 +0000 UTC Remote: 2025-05-10 18:02:53.327719446 +0000 UTC m=+6.971359045 (delta=119.411041ms)
	I0510 18:02:53.453119  402035 fix.go:200] guest clock delta is within tolerance: 119.411041ms
	I0510 18:02:53.453123  402035 start.go:83] releasing machines lock for "functional-581506", held for 6.964180893s
	I0510 18:02:53.453145  402035 main.go:141] libmachine: (functional-581506) Calling .DriverName
	I0510 18:02:53.453448  402035 main.go:141] libmachine: (functional-581506) Calling .GetIP
	I0510 18:02:53.456220  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:53.456476  402035 main.go:141] libmachine: (functional-581506) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:2c:dc", ip: ""} in network mk-functional-581506: {Iface:virbr1 ExpiryTime:2025-05-10 19:00:46 +0000 UTC Type:0 Mac:52:54:00:34:2c:dc Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-581506 Clientid:01:52:54:00:34:2c:dc}
	I0510 18:02:53.456494  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined IP address 192.168.39.52 and MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:53.456627  402035 main.go:141] libmachine: (functional-581506) Calling .DriverName
	I0510 18:02:53.457205  402035 main.go:141] libmachine: (functional-581506) Calling .DriverName
	I0510 18:02:53.457369  402035 main.go:141] libmachine: (functional-581506) Calling .DriverName
	I0510 18:02:53.457461  402035 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0510 18:02:53.457506  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHHostname
	I0510 18:02:53.457607  402035 ssh_runner.go:195] Run: cat /version.json
	I0510 18:02:53.457625  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHHostname
	I0510 18:02:53.460159  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:53.460383  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:53.460534  402035 main.go:141] libmachine: (functional-581506) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:2c:dc", ip: ""} in network mk-functional-581506: {Iface:virbr1 ExpiryTime:2025-05-10 19:00:46 +0000 UTC Type:0 Mac:52:54:00:34:2c:dc Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-581506 Clientid:01:52:54:00:34:2c:dc}
	I0510 18:02:53.460568  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined IP address 192.168.39.52 and MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:53.460745  402035 main.go:141] libmachine: (functional-581506) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:2c:dc", ip: ""} in network mk-functional-581506: {Iface:virbr1 ExpiryTime:2025-05-10 19:00:46 +0000 UTC Type:0 Mac:52:54:00:34:2c:dc Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-581506 Clientid:01:52:54:00:34:2c:dc}
	I0510 18:02:53.460761  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined IP address 192.168.39.52 and MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:53.460773  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHPort
	I0510 18:02:53.460958  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHKeyPath
	I0510 18:02:53.460967  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHPort
	I0510 18:02:53.461130  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHUsername
	I0510 18:02:53.461146  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHKeyPath
	I0510 18:02:53.461326  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHUsername
	I0510 18:02:53.461314  402035 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/functional-581506/id_rsa Username:docker}
	I0510 18:02:53.461447  402035 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/functional-581506/id_rsa Username:docker}
	I0510 18:02:53.559403  402035 ssh_runner.go:195] Run: systemctl --version
	I0510 18:02:53.582132  402035 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0510 18:02:53.770630  402035 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0510 18:02:53.783161  402035 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0510 18:02:53.783285  402035 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0510 18:02:53.798993  402035 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0510 18:02:53.799013  402035 start.go:495] detecting cgroup driver to use...
	I0510 18:02:53.799097  402035 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0510 18:02:53.823538  402035 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0510 18:02:53.848708  402035 docker.go:225] disabling cri-docker service (if available) ...
	I0510 18:02:53.848771  402035 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0510 18:02:53.880475  402035 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0510 18:02:53.909205  402035 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0510 18:02:54.228229  402035 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0510 18:02:54.462507  402035 docker.go:241] disabling docker service ...
	I0510 18:02:54.462575  402035 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0510 18:02:54.497169  402035 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0510 18:02:54.516357  402035 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0510 18:02:54.753088  402035 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0510 18:02:54.940449  402035 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0510 18:02:54.956825  402035 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0510 18:02:54.980731  402035 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0510 18:02:54.980784  402035 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 18:02:54.993371  402035 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0510 18:02:54.993440  402035 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 18:02:55.006052  402035 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 18:02:55.018197  402035 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 18:02:55.030433  402035 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0510 18:02:55.045006  402035 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 18:02:55.057444  402035 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 18:02:55.071727  402035 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 18:02:55.084200  402035 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0510 18:02:55.096230  402035 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0510 18:02:55.107855  402035 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 18:02:55.290042  402035 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0510 18:04:25.856147  402035 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.566058413s)
	I0510 18:04:25.856185  402035 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0510 18:04:25.856270  402035 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0510 18:04:25.863129  402035 start.go:563] Will wait 60s for crictl version
	I0510 18:04:25.863197  402035 ssh_runner.go:195] Run: which crictl
	I0510 18:04:25.868051  402035 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0510 18:04:25.911506  402035 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0510 18:04:25.911578  402035 ssh_runner.go:195] Run: crio --version
	I0510 18:04:25.945197  402035 ssh_runner.go:195] Run: crio --version
	I0510 18:04:25.980379  402035 out.go:177] * Preparing Kubernetes v1.33.0 on CRI-O 1.29.1 ...
	I0510 18:04:25.982219  402035 main.go:141] libmachine: (functional-581506) Calling .GetIP
	I0510 18:04:25.985326  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:04:25.985730  402035 main.go:141] libmachine: (functional-581506) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:2c:dc", ip: ""} in network mk-functional-581506: {Iface:virbr1 ExpiryTime:2025-05-10 19:00:46 +0000 UTC Type:0 Mac:52:54:00:34:2c:dc Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-581506 Clientid:01:52:54:00:34:2c:dc}
	I0510 18:04:25.985751  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined IP address 192.168.39.52 and MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:04:25.985941  402035 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0510 18:04:25.993435  402035 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0510 18:04:25.995308  402035 kubeadm.go:875] updating cluster {Name:functional-581506 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.33.0 ClusterName:functional-581506 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.52 Port:8441 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountS
tring:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0510 18:04:25.995446  402035 preload.go:131] Checking if preload exists for k8s version v1.33.0 and runtime crio
	I0510 18:04:25.995518  402035 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 18:04:26.045932  402035 crio.go:514] all images are preloaded for cri-o runtime.
	I0510 18:04:26.045946  402035 crio.go:433] Images already preloaded, skipping extraction
	I0510 18:04:26.046014  402035 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 18:04:26.085235  402035 crio.go:514] all images are preloaded for cri-o runtime.
	I0510 18:04:26.085254  402035 cache_images.go:84] Images are preloaded, skipping loading
	I0510 18:04:26.085265  402035 kubeadm.go:926] updating node { 192.168.39.52 8441 v1.33.0 crio true true} ...
	I0510 18:04:26.085431  402035 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.33.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-581506 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.52
	
	[Install]
	 config:
	{KubernetesVersion:v1.33.0 ClusterName:functional-581506 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0510 18:04:26.085506  402035 ssh_runner.go:195] Run: crio config
	I0510 18:04:26.138253  402035 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0510 18:04:26.138281  402035 cni.go:84] Creating CNI manager for ""
	I0510 18:04:26.138297  402035 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0510 18:04:26.138305  402035 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0510 18:04:26.138331  402035 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.52 APIServerPort:8441 KubernetesVersion:v1.33.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-581506 NodeName:functional-581506 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.52"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.52 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts
:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0510 18:04:26.138459  402035 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.52
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-581506"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.52"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.52"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.33.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0510 18:04:26.138527  402035 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.33.0
	I0510 18:04:26.152410  402035 binaries.go:44] Found k8s binaries, skipping transfer
	I0510 18:04:26.152484  402035 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0510 18:04:26.164608  402035 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0510 18:04:26.187091  402035 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0510 18:04:26.208040  402035 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2144 bytes)
	I0510 18:04:26.231151  402035 ssh_runner.go:195] Run: grep 192.168.39.52	control-plane.minikube.internal$ /etc/hosts
	I0510 18:04:26.235726  402035 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 18:04:26.416698  402035 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0510 18:04:26.435417  402035 certs.go:68] Setting up /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506 for IP: 192.168.39.52
	I0510 18:04:26.435435  402035 certs.go:194] generating shared ca certs ...
	I0510 18:04:26.435455  402035 certs.go:226] acquiring lock for ca certs: {Name:mk8db74782205da4ac57ef815dd495cda255251a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 18:04:26.435657  402035 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20720-388787/.minikube/ca.key
	I0510 18:04:26.435715  402035 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.key
	I0510 18:04:26.435724  402035 certs.go:256] generating profile certs ...
	I0510 18:04:26.435807  402035 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/client.key
	I0510 18:04:26.435852  402035 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/apiserver.key.e77f3034
	I0510 18:04:26.435879  402035 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/proxy-client.key
	I0510 18:04:26.435998  402035 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/395980.pem (1338 bytes)
	W0510 18:04:26.436022  402035 certs.go:480] ignoring /home/jenkins/minikube-integration/20720-388787/.minikube/certs/395980_empty.pem, impossibly tiny 0 bytes
	I0510 18:04:26.436028  402035 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem (1679 bytes)
	I0510 18:04:26.436049  402035 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem (1078 bytes)
	I0510 18:04:26.436067  402035 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem (1123 bytes)
	I0510 18:04:26.436088  402035 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem (1675 bytes)
	I0510 18:04:26.436136  402035 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem (1708 bytes)
	I0510 18:04:26.436850  402035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0510 18:04:26.469054  402035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0510 18:04:26.499255  402035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0510 18:04:26.529739  402035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0510 18:04:26.561946  402035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0510 18:04:26.595162  402035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0510 18:04:26.627840  402035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0510 18:04:26.659449  402035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0510 18:04:26.693269  402035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/certs/395980.pem --> /usr/share/ca-certificates/395980.pem (1338 bytes)
	I0510 18:04:26.724816  402035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem --> /usr/share/ca-certificates/3959802.pem (1708 bytes)
	I0510 18:04:26.754834  402035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0510 18:04:26.787011  402035 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0510 18:04:26.809227  402035 ssh_runner.go:195] Run: openssl version
	I0510 18:04:26.817671  402035 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3959802.pem && ln -fs /usr/share/ca-certificates/3959802.pem /etc/ssl/certs/3959802.pem"
	I0510 18:04:26.831583  402035 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3959802.pem
	I0510 18:04:26.837165  402035 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 10 18:00 /usr/share/ca-certificates/3959802.pem
	I0510 18:04:26.837228  402035 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3959802.pem
	I0510 18:04:26.845401  402035 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3959802.pem /etc/ssl/certs/3ec20f2e.0"
	I0510 18:04:26.857985  402035 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0510 18:04:26.871727  402035 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0510 18:04:26.877551  402035 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 10 17:52 /usr/share/ca-certificates/minikubeCA.pem
	I0510 18:04:26.877655  402035 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0510 18:04:26.885597  402035 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0510 18:04:26.897966  402035 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/395980.pem && ln -fs /usr/share/ca-certificates/395980.pem /etc/ssl/certs/395980.pem"
	I0510 18:04:26.911449  402035 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/395980.pem
	I0510 18:04:26.917136  402035 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 10 18:00 /usr/share/ca-certificates/395980.pem
	I0510 18:04:26.917209  402035 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/395980.pem
	I0510 18:04:26.924808  402035 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/395980.pem /etc/ssl/certs/51391683.0"
	I0510 18:04:26.957285  402035 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0510 18:04:26.969150  402035 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0510 18:04:26.987736  402035 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0510 18:04:27.006182  402035 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0510 18:04:27.022469  402035 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0510 18:04:27.031936  402035 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0510 18:04:27.044701  402035 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0510 18:04:27.061968  402035 kubeadm.go:392] StartCluster: {Name:functional-581506 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33
.0 ClusterName:functional-581506 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.52 Port:8441 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountStri
ng:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 18:04:27.062052  402035 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0510 18:04:27.062122  402035 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0510 18:04:27.165135  402035 cri.go:89] found id: "67bef24b725ebf7a2b7f343d7456516d6b5de38118f9cf48e7d70d9146ce2087"
	I0510 18:04:27.165148  402035 cri.go:89] found id: "2b4eccacbeea6a58cc9c575f2c2bf5f8297029f9c9d2a9264bcf3e69644b4c28"
	I0510 18:04:27.165151  402035 cri.go:89] found id: "9ddf6914642a098d580c48db641460c4197df74a06bf7008e362f610f185934d"
	I0510 18:04:27.165153  402035 cri.go:89] found id: "08d812eb972925640e90642a5458269dea94298436a73e78a578d0bfe369daaf"
	I0510 18:04:27.165155  402035 cri.go:89] found id: "74fd0b7de642965eb7e03cf324017cb2195034685758e46efbd5e6997aba9ae5"
	I0510 18:04:27.165157  402035 cri.go:89] found id: "5879bea6c3a25517766471c3eec758ce0c6d853db7055e1f3505263a674ed969"
	I0510 18:04:27.165158  402035 cri.go:89] found id: "bc42d63e6220a437de1d056d765ed97df2e6978798401b10283f61c7b1bc895b"
	I0510 18:04:27.165160  402035 cri.go:89] found id: ""
	I0510 18:04:27.165206  402035 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-581506 -n functional-581506
helpers_test.go:261: (dbg) Run:  kubectl --context functional-581506 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/ComponentHealth FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/ComponentHealth (2.18s)
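For anyone triaging this failure by hand: the post-mortem above shows the cluster being re-provisioned with enable-admission-plugins=NamespaceAutoProvision and CRI-O taking roughly 90 seconds to restart (18:02:55 to 18:04:25) before the component-health assertions are evaluated. A minimal sketch of how to spot-check the same control plane locally, assuming the functional-581506 profile and kubectl context from this run (the test's actual assertions live in functional_test.go and may differ):

	# Control-plane static pods created by kubeadm carry component/tier labels.
	kubectl --context functional-581506 -n kube-system get pods -l tier=control-plane -o wide

	# The restart above passes enable-admission-plugins=NamespaceAutoProvision to the apiserver;
	# verify the flag actually landed on the running kube-apiserver pod.
	kubectl --context functional-581506 -n kube-system get pod -l component=kube-apiserver \
	  -o jsonpath='{.items[0].spec.containers[0].command}' | tr ',' '\n' | grep admission

	# The apiserver health endpoint can also be queried directly through the API:
	kubectl --context functional-581506 get --raw /healthz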

                                                
                                    
TestFunctional/parallel/DashboardCmd (302.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-581506 --alsologtostderr -v=1]
functional_test.go:935: output didn't produce a URL
functional_test.go:927: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-581506 --alsologtostderr -v=1] ...
functional_test.go:927: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-581506 --alsologtostderr -v=1] stdout:
functional_test.go:927: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-581506 --alsologtostderr -v=1] stderr:
I0510 18:16:06.388199  407246 out.go:345] Setting OutFile to fd 1 ...
I0510 18:16:06.388316  407246 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0510 18:16:06.388325  407246 out.go:358] Setting ErrFile to fd 2...
I0510 18:16:06.388329  407246 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0510 18:16:06.388544  407246 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-388787/.minikube/bin
I0510 18:16:06.388758  407246 mustload.go:65] Loading cluster: functional-581506
I0510 18:16:06.389144  407246 config.go:182] Loaded profile config "functional-581506": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
I0510 18:16:06.389494  407246 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0510 18:16:06.389554  407246 main.go:141] libmachine: Launching plugin server for driver kvm2
I0510 18:16:06.405271  407246 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33915
I0510 18:16:06.405799  407246 main.go:141] libmachine: () Calling .GetVersion
I0510 18:16:06.406395  407246 main.go:141] libmachine: Using API Version  1
I0510 18:16:06.406430  407246 main.go:141] libmachine: () Calling .SetConfigRaw
I0510 18:16:06.406843  407246 main.go:141] libmachine: () Calling .GetMachineName
I0510 18:16:06.407064  407246 main.go:141] libmachine: (functional-581506) Calling .GetState
I0510 18:16:06.408788  407246 host.go:66] Checking if "functional-581506" exists ...
I0510 18:16:06.409231  407246 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0510 18:16:06.409281  407246 main.go:141] libmachine: Launching plugin server for driver kvm2
I0510 18:16:06.425836  407246 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37759
I0510 18:16:06.426403  407246 main.go:141] libmachine: () Calling .GetVersion
I0510 18:16:06.427020  407246 main.go:141] libmachine: Using API Version  1
I0510 18:16:06.427055  407246 main.go:141] libmachine: () Calling .SetConfigRaw
I0510 18:16:06.427493  407246 main.go:141] libmachine: () Calling .GetMachineName
I0510 18:16:06.427722  407246 main.go:141] libmachine: (functional-581506) Calling .DriverName
I0510 18:16:06.427904  407246 api_server.go:166] Checking apiserver status ...
I0510 18:16:06.427975  407246 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0510 18:16:06.428006  407246 main.go:141] libmachine: (functional-581506) Calling .GetSSHHostname
I0510 18:16:06.430807  407246 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined MAC address 52:54:00:34:2c:dc in network mk-functional-581506
I0510 18:16:06.431202  407246 main.go:141] libmachine: (functional-581506) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:2c:dc", ip: ""} in network mk-functional-581506: {Iface:virbr1 ExpiryTime:2025-05-10 19:00:46 +0000 UTC Type:0 Mac:52:54:00:34:2c:dc Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-581506 Clientid:01:52:54:00:34:2c:dc}
I0510 18:16:06.431225  407246 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined IP address 192.168.39.52 and MAC address 52:54:00:34:2c:dc in network mk-functional-581506
I0510 18:16:06.431422  407246 main.go:141] libmachine: (functional-581506) Calling .GetSSHPort
I0510 18:16:06.431604  407246 main.go:141] libmachine: (functional-581506) Calling .GetSSHKeyPath
I0510 18:16:06.431817  407246 main.go:141] libmachine: (functional-581506) Calling .GetSSHUsername
I0510 18:16:06.432034  407246 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/functional-581506/id_rsa Username:docker}
I0510 18:16:06.531702  407246 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/6879/cgroup
W0510 18:16:06.543756  407246 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/6879/cgroup: Process exited with status 1
stdout:

                                                
                                                
stderr:
I0510 18:16:06.543836  407246 ssh_runner.go:195] Run: ls
I0510 18:16:06.548853  407246 api_server.go:253] Checking apiserver healthz at https://192.168.39.52:8441/healthz ...
I0510 18:16:06.554418  407246 api_server.go:279] https://192.168.39.52:8441/healthz returned 200:
ok
W0510 18:16:06.554471  407246 out.go:270] * Enabling dashboard ...
* Enabling dashboard ...
I0510 18:16:06.554639  407246 config.go:182] Loaded profile config "functional-581506": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
I0510 18:16:06.554671  407246 addons.go:69] Setting dashboard=true in profile "functional-581506"
I0510 18:16:06.554685  407246 addons.go:238] Setting addon dashboard=true in "functional-581506"
I0510 18:16:06.554713  407246 host.go:66] Checking if "functional-581506" exists ...
I0510 18:16:06.554971  407246 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0510 18:16:06.555017  407246 main.go:141] libmachine: Launching plugin server for driver kvm2
I0510 18:16:06.570784  407246 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42933
I0510 18:16:06.571408  407246 main.go:141] libmachine: () Calling .GetVersion
I0510 18:16:06.572030  407246 main.go:141] libmachine: Using API Version  1
I0510 18:16:06.572061  407246 main.go:141] libmachine: () Calling .SetConfigRaw
I0510 18:16:06.572403  407246 main.go:141] libmachine: () Calling .GetMachineName
I0510 18:16:06.573033  407246 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0510 18:16:06.573081  407246 main.go:141] libmachine: Launching plugin server for driver kvm2
I0510 18:16:06.589706  407246 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41619
I0510 18:16:06.590207  407246 main.go:141] libmachine: () Calling .GetVersion
I0510 18:16:06.590677  407246 main.go:141] libmachine: Using API Version  1
I0510 18:16:06.590706  407246 main.go:141] libmachine: () Calling .SetConfigRaw
I0510 18:16:06.591174  407246 main.go:141] libmachine: () Calling .GetMachineName
I0510 18:16:06.591377  407246 main.go:141] libmachine: (functional-581506) Calling .GetState
I0510 18:16:06.593310  407246 main.go:141] libmachine: (functional-581506) Calling .DriverName
I0510 18:16:06.595395  407246 out.go:177]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I0510 18:16:06.596554  407246 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0510 18:16:06.597601  407246 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0510 18:16:06.597618  407246 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0510 18:16:06.597638  407246 main.go:141] libmachine: (functional-581506) Calling .GetSSHHostname
I0510 18:16:06.600954  407246 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined MAC address 52:54:00:34:2c:dc in network mk-functional-581506
I0510 18:16:06.601409  407246 main.go:141] libmachine: (functional-581506) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:2c:dc", ip: ""} in network mk-functional-581506: {Iface:virbr1 ExpiryTime:2025-05-10 19:00:46 +0000 UTC Type:0 Mac:52:54:00:34:2c:dc Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-581506 Clientid:01:52:54:00:34:2c:dc}
I0510 18:16:06.601448  407246 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined IP address 192.168.39.52 and MAC address 52:54:00:34:2c:dc in network mk-functional-581506
I0510 18:16:06.601539  407246 main.go:141] libmachine: (functional-581506) Calling .GetSSHPort
I0510 18:16:06.601742  407246 main.go:141] libmachine: (functional-581506) Calling .GetSSHKeyPath
I0510 18:16:06.601889  407246 main.go:141] libmachine: (functional-581506) Calling .GetSSHUsername
I0510 18:16:06.602014  407246 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/functional-581506/id_rsa Username:docker}
I0510 18:16:06.703439  407246 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0510 18:16:06.703478  407246 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0510 18:16:06.724705  407246 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0510 18:16:06.724751  407246 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0510 18:16:06.745767  407246 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0510 18:16:06.745797  407246 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0510 18:16:06.766880  407246 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0510 18:16:06.766906  407246 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I0510 18:16:06.789341  407246 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0510 18:16:06.789368  407246 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0510 18:16:06.810655  407246 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0510 18:16:06.810691  407246 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0510 18:16:06.835830  407246 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0510 18:16:06.835866  407246 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0510 18:16:06.857816  407246 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0510 18:16:06.857849  407246 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0510 18:16:06.880018  407246 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0510 18:16:06.880053  407246 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0510 18:16:06.907043  407246 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0510 18:16:07.812257  407246 main.go:141] libmachine: Making call to close driver server
I0510 18:16:07.812293  407246 main.go:141] libmachine: (functional-581506) Calling .Close
I0510 18:16:07.812633  407246 main.go:141] libmachine: Successfully made call to close driver server
I0510 18:16:07.812654  407246 main.go:141] libmachine: Making call to close connection to plugin binary
I0510 18:16:07.812665  407246 main.go:141] libmachine: Making call to close driver server
I0510 18:16:07.812697  407246 main.go:141] libmachine: (functional-581506) Calling .Close
I0510 18:16:07.812973  407246 main.go:141] libmachine: Successfully made call to close driver server
I0510 18:16:07.813007  407246 main.go:141] libmachine: Making call to close connection to plugin binary
I0510 18:16:07.813002  407246 main.go:141] libmachine: (functional-581506) DBG | Closing plugin on server side
I0510 18:16:07.814693  407246 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:

                                                
                                                
	minikube -p functional-581506 addons enable metrics-server

                                                
                                                
I0510 18:16:07.816129  407246 addons.go:201] Writing out "functional-581506" config to set dashboard=true...
W0510 18:16:07.816372  407246 out.go:270] * Verifying dashboard health ...
* Verifying dashboard health ...
I0510 18:16:07.817008  407246 kapi.go:59] client config for functional-581506: &rest.Config{Host:"https://192.168.39.52:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/client.crt", KeyFile:"/home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/client.key", CAFile:"/home/jenkins/minikube-integration/20720-388787/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24b3a60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0510 18:16:07.817557  407246 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0510 18:16:07.817573  407246 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0510 18:16:07.817578  407246 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0510 18:16:07.817585  407246 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0510 18:16:07.829754  407246 service.go:214] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  d77498ce-7dfb-413b-b26c-73b90e688bac 1340 0 2025-05-10 18:16:07 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-05-10 18:16:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.107.114.28,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.107.114.28],IPFamilies:[IPv4],AllocateLoadBalance
rNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0510 18:16:07.829941  407246 out.go:270] * Launching proxy ...
* Launching proxy ...
I0510 18:16:07.830019  407246 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-581506 proxy --port 36195]
I0510 18:16:07.830275  407246 dashboard.go:157] Waiting for kubectl to output host:port ...
I0510 18:16:07.875969  407246 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
W0510 18:16:07.876007  407246 out.go:270] * Verifying proxy health ...
* Verifying proxy health ...
I0510 18:16:07.885086  407246 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ee49fbbd-5446-4b39-a5de-d00c1c27ece6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 18:16:07 GMT]] Body:0xc00179a400 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000214c80 TLS:<nil>}
I0510 18:16:07.885198  407246 retry.go:31] will retry after 119.572µs: Temporary Error: unexpected response code: 503
I0510 18:16:07.889019  407246 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a778d71b-2107-4cfe-bcd6-b68676338a2a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 18:16:07 GMT]] Body:0xc00048c500 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017be500 TLS:<nil>}
I0510 18:16:07.889077  407246 retry.go:31] will retry after 143.753µs: Temporary Error: unexpected response code: 503
I0510 18:16:07.893251  407246 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[46b5349c-6496-4d36-95cd-d24dd65e6ee6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 18:16:07 GMT]] Body:0xc000a60a40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000614f00 TLS:<nil>}
I0510 18:16:07.893320  407246 retry.go:31] will retry after 201.348µs: Temporary Error: unexpected response code: 503
I0510 18:16:07.897019  407246 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[555298bb-5339-45bf-9ed7-5ab3d5c15686] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 18:16:07 GMT]] Body:0xc00048c5c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000214dc0 TLS:<nil>}
I0510 18:16:07.897085  407246 retry.go:31] will retry after 487.138µs: Temporary Error: unexpected response code: 503
I0510 18:16:07.901308  407246 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9b5bc2f9-bb76-4f6d-bd1c-b83f05320152] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 18:16:07 GMT]] Body:0xc00179a580 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000615040 TLS:<nil>}
I0510 18:16:07.901388  407246 retry.go:31] will retry after 652.472µs: Temporary Error: unexpected response code: 503
I0510 18:16:07.905040  407246 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e725fc5c-bd59-4e83-8c71-70cb2b7ed227] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 18:16:07 GMT]] Body:0xc00048c680 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017be640 TLS:<nil>}
I0510 18:16:07.905109  407246 retry.go:31] will retry after 900.66µs: Temporary Error: unexpected response code: 503
I0510 18:16:07.908601  407246 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2d32ffa3-e26d-4350-9b0d-ffe177cde8a6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 18:16:07 GMT]] Body:0xc000a60c80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000615540 TLS:<nil>}
I0510 18:16:07.908674  407246 retry.go:31] will retry after 910.05µs: Temporary Error: unexpected response code: 503
I0510 18:16:07.912182  407246 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[225bfdc7-e6ad-4431-b1e9-f05af0f0c81d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 18:16:07 GMT]] Body:0xc00048c740 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000214f00 TLS:<nil>}
I0510 18:16:07.912242  407246 retry.go:31] will retry after 1.857683ms: Temporary Error: unexpected response code: 503
I0510 18:16:07.916769  407246 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[09288517-e655-4513-af55-fa2a46965ac3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 18:16:07 GMT]] Body:0xc00048c800 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000615680 TLS:<nil>}
I0510 18:16:07.916836  407246 retry.go:31] will retry after 2.260832ms: Temporary Error: unexpected response code: 503
I0510 18:16:07.922490  407246 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e98c4863-a7ab-4134-b2b4-de0d9f9bb6bb] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 18:16:07 GMT]] Body:0xc00179a740 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0006157c0 TLS:<nil>}
I0510 18:16:07.922557  407246 retry.go:31] will retry after 2.249718ms: Temporary Error: unexpected response code: 503
I0510 18:16:07.927971  407246 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[473c308c-352a-4345-9d7f-6cb5b12da6db] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 18:16:07 GMT]] Body:0xc00048c8c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017be780 TLS:<nil>}
I0510 18:16:07.928052  407246 retry.go:31] will retry after 3.041675ms: Temporary Error: unexpected response code: 503
I0510 18:16:07.933568  407246 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[218d9434-1e0b-49d9-8b58-8beb9d47782a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 18:16:07 GMT]] Body:0xc000a60d80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0015c4000 TLS:<nil>}
I0510 18:16:07.933634  407246 retry.go:31] will retry after 6.617138ms: Temporary Error: unexpected response code: 503
I0510 18:16:07.943664  407246 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[561b21f1-542e-4923-9221-1a08859c33fa] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 18:16:07 GMT]] Body:0xc00048c9c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000215180 TLS:<nil>}
I0510 18:16:07.943728  407246 retry.go:31] will retry after 9.326309ms: Temporary Error: unexpected response code: 503
I0510 18:16:07.956875  407246 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[879df7bb-45cf-4415-a161-fe7164725ac5] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 18:16:07 GMT]] Body:0xc00048ca80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0015c4140 TLS:<nil>}
I0510 18:16:07.956936  407246 retry.go:31] will retry after 26.776682ms: Temporary Error: unexpected response code: 503
I0510 18:16:07.987294  407246 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3813c5e6-0477-4639-906a-5a00a348ba00] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 18:16:07 GMT]] Body:0xc00179a880 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0015c4280 TLS:<nil>}
I0510 18:16:07.987372  407246 retry.go:31] will retry after 36.516891ms: Temporary Error: unexpected response code: 503
I0510 18:16:08.027678  407246 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5089651d-9317-453f-a197-4cc6d788de6c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 18:16:08 GMT]] Body:0xc00048cb40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017be8c0 TLS:<nil>}
I0510 18:16:08.027744  407246 retry.go:31] will retry after 63.614214ms: Temporary Error: unexpected response code: 503
I0510 18:16:08.095072  407246 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[48d3c47f-2538-4ef0-86d1-c4116f6412c9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 18:16:08 GMT]] Body:0xc00179a980 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0015c43c0 TLS:<nil>}
I0510 18:16:08.095145  407246 retry.go:31] will retry after 92.911295ms: Temporary Error: unexpected response code: 503
I0510 18:16:08.191334  407246 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6f86d8f1-25bd-4dbc-bd3f-f576e71cc3b7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 18:16:08 GMT]] Body:0xc000a60ec0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017bea00 TLS:<nil>}
I0510 18:16:08.191408  407246 retry.go:31] will retry after 134.892204ms: Temporary Error: unexpected response code: 503
I0510 18:16:08.329787  407246 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5adad15e-aa8e-47a3-bdcb-6ed268a43c20] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 18:16:08 GMT]] Body:0xc00048cf80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000215680 TLS:<nil>}
I0510 18:16:08.329868  407246 retry.go:31] will retry after 106.782839ms: Temporary Error: unexpected response code: 503
I0510 18:16:08.440104  407246 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8c290a4d-cb06-4d40-816b-ba92aef53a85] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 18:16:08 GMT]] Body:0xc00179aac0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0015c4500 TLS:<nil>}
I0510 18:16:08.440179  407246 retry.go:31] will retry after 317.66143ms: Temporary Error: unexpected response code: 503
I0510 18:16:08.763217  407246 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b03ffb07-ca7c-4293-a0fc-f27786339785] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 18:16:08 GMT]] Body:0xc00048d140 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017beb40 TLS:<nil>}
I0510 18:16:08.763311  407246 retry.go:31] will retry after 420.173863ms: Temporary Error: unexpected response code: 503
I0510 18:16:09.187028  407246 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[dcaf9234-1996-4d73-b876-bd2465707a25] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 18:16:09 GMT]] Body:0xc000a60f80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0015c4640 TLS:<nil>}
I0510 18:16:09.187105  407246 retry.go:31] will retry after 307.538064ms: Temporary Error: unexpected response code: 503
I0510 18:16:09.498267  407246 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8902cb14-39c8-4f3e-af7d-b1561b000e54] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 18:16:09 GMT]] Body:0xc000a610c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002157c0 TLS:<nil>}
I0510 18:16:09.498330  407246 retry.go:31] will retry after 708.843812ms: Temporary Error: unexpected response code: 503
I0510 18:16:10.211417  407246 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2060c2ff-3b4c-4809-9872-209b875804ff] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 18:16:10 GMT]] Body:0xc00048d280 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000215900 TLS:<nil>}
I0510 18:16:10.211498  407246 retry.go:31] will retry after 696.515001ms: Temporary Error: unexpected response code: 503
I0510 18:16:10.911513  407246 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b74c4bf2-c4c7-4684-afc4-c45b9c92eea6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 18:16:10 GMT]] Body:0xc00179ac00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0015c4780 TLS:<nil>}
I0510 18:16:10.911593  407246 retry.go:31] will retry after 1.377570992s: Temporary Error: unexpected response code: 503
I0510 18:16:12.292674  407246 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f7669343-55f3-42d0-a6e9-59cf20c58696] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 18:16:12 GMT]] Body:0xc000a61240 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0015c48c0 TLS:<nil>}
I0510 18:16:12.292759  407246 retry.go:31] will retry after 3.096419009s: Temporary Error: unexpected response code: 503
I0510 18:16:15.394353  407246 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7a2d137d-7f50-41b1-977b-dc60d3243fa9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 18:16:15 GMT]] Body:0xc00048d400 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000215a40 TLS:<nil>}
I0510 18:16:15.394428  407246 retry.go:31] will retry after 3.172360514s: Temporary Error: unexpected response code: 503
I0510 18:16:18.570732  407246 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[79d3b360-a6b9-438c-b71e-4951213d2cee] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 18:16:18 GMT]] Body:0xc000a61380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0015c4a00 TLS:<nil>}
I0510 18:16:18.570825  407246 retry.go:31] will retry after 7.590008227s: Temporary Error: unexpected response code: 503
I0510 18:16:26.164302  407246 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[29717f19-d274-4a60-93a0-aff0cc550e04] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 18:16:26 GMT]] Body:0xc000a615c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000215b80 TLS:<nil>}
I0510 18:16:26.164379  407246 retry.go:31] will retry after 7.326128959s: Temporary Error: unexpected response code: 503
I0510 18:16:33.494176  407246 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[90663412-5fa9-400a-bb00-34b4d7e0e236] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 18:16:33 GMT]] Body:0xc000a616c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000215cc0 TLS:<nil>}
I0510 18:16:33.494241  407246 retry.go:31] will retry after 18.004424368s: Temporary Error: unexpected response code: 503
I0510 18:16:51.502428  407246 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5602ca0a-7ea6-43c0-a33a-6f9296461ef1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 18:16:51 GMT]] Body:0xc00179acc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000215e00 TLS:<nil>}
I0510 18:16:51.502497  407246 retry.go:31] will retry after 24.83351644s: Temporary Error: unexpected response code: 503
I0510 18:17:16.344190  407246 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[94470676-6c4d-4f14-9433-34e7bfc7d310] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 18:17:16 GMT]] Body:0xc00179ad80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017bec80 TLS:<nil>}
I0510 18:17:16.344287  407246 retry.go:31] will retry after 32.216560878s: Temporary Error: unexpected response code: 503
I0510 18:17:48.564144  407246 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[927ad47d-a92d-4cf5-b82b-f685c9a99a46] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 18:17:48 GMT]] Body:0xc00048d540 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0018a0000 TLS:<nil>}
I0510 18:17:48.564216  407246 retry.go:31] will retry after 23.866605416s: Temporary Error: unexpected response code: 503
I0510 18:18:12.435201  407246 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[10f7dc9e-eefc-43d6-8e5e-df2262f3d58e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 18:18:12 GMT]] Body:0xc0009cc040 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0015c4b40 TLS:<nil>}
I0510 18:18:12.435311  407246 retry.go:31] will retry after 31.281119033s: Temporary Error: unexpected response code: 503
I0510 18:18:43.721353  407246 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b44525dd-7f25-485f-8b6b-eeedba7bb7a0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 18:18:43 GMT]] Body:0xc000a601c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0015c4c80 TLS:<nil>}
I0510 18:18:43.721424  407246 retry.go:31] will retry after 1m1.183911838s: Temporary Error: unexpected response code: 503
I0510 18:19:44.910620  407246 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6a39a7c6-61e3-4989-9179-9425e60ca1fe] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 18:19:44 GMT]] Body:0xc000a60240 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0015c4dc0 TLS:<nil>}
I0510 18:19:44.910723  407246 retry.go:31] will retry after 37.99628572s: Temporary Error: unexpected response code: 503
I0510 18:20:22.910999  407246 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c25460ab-853f-47d8-9396-939292ae8c22] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 18:20:22 GMT]] Body:0xc00048c040 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0015c4f00 TLS:<nil>}
I0510 18:20:22.911100  407246 retry.go:31] will retry after 56.52624434s: Temporary Error: unexpected response code: 503
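The `retry.go:31` lines above show a poll-with-backoff loop: the dashboard proxy URL keeps answering 503 and the client retries after steadily growing intervals until the test's deadline is reached. A minimal Go sketch of that pattern is below; it is not minikube's actual retry implementation, and the function names, jitter scheme, and timeout value are illustrative assumptions only.

```go
// Sketch of the poll-with-backoff pattern seen in the log: request the
// dashboard proxy URL, treat a 503 as a temporary error, and retry with an
// exponentially growing, jittered delay until a deadline expires.
// All identifiers here are hypothetical, not minikube's own code.
package main

import (
	"fmt"
	"math/rand"
	"net/http"
	"time"
)

func pollUntilReady(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 100 * time.Microsecond // the log starts with sub-millisecond waits
	for attempt := 1; ; attempt++ {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // dashboard is serving
			}
			fmt.Printf("attempt %d: unexpected response code: %d\n", attempt, resp.StatusCode)
		} else {
			fmt.Printf("attempt %d: %v\n", attempt, err)
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("gave up after %s: dashboard never became ready", timeout)
		}
		// Exponential backoff with jitter, loosely matching the growing
		// "will retry after ..." intervals in the log above.
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
		delay *= 2
	}
}

func main() {
	// Hypothetical local proxy URL, mirroring the one the test polls.
	url := "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"
	if err := pollUntilReady(url, 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```

In the failing run the 503 never clears, so the loop keeps backing off until the test times out and the post-mortem logs below are collected.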
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-581506 -n functional-581506
helpers_test.go:244: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-581506 logs -n 25: (1.519809433s)
helpers_test.go:252: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	|----------------|------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| mount          | -p functional-581506                                                   | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:16 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup3783465736/001:/mount3 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                   |         |         |                     |                     |
	| mount          | -p functional-581506                                                   | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:16 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup3783465736/001:/mount2 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                   |         |         |                     |                     |
	| ssh            | functional-581506 ssh findmnt                                          | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:16 UTC | 10 May 25 18:16 UTC |
	|                | -T /mount1                                                             |                   |         |         |                     |                     |
	| ssh            | functional-581506 ssh findmnt                                          | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:16 UTC | 10 May 25 18:16 UTC |
	|                | -T /mount2                                                             |                   |         |         |                     |                     |
	| ssh            | functional-581506 ssh findmnt                                          | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:16 UTC | 10 May 25 18:16 UTC |
	|                | -T /mount3                                                             |                   |         |         |                     |                     |
	| mount          | -p functional-581506                                                   | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:16 UTC |                     |
	|                | --kill=true                                                            |                   |         |         |                     |                     |
	| start          | -p functional-581506                                                   | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:16 UTC |                     |
	|                | --dry-run --memory                                                     |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr                                                |                   |         |         |                     |                     |
	|                | --driver=kvm2                                                          |                   |         |         |                     |                     |
	|                | --container-runtime=crio                                               |                   |         |         |                     |                     |
	| start          | -p functional-581506                                                   | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:16 UTC |                     |
	|                | --dry-run --alsologtostderr                                            |                   |         |         |                     |                     |
	|                | -v=1 --driver=kvm2                                                     |                   |         |         |                     |                     |
	|                | --container-runtime=crio                                               |                   |         |         |                     |                     |
	| start          | -p functional-581506                                                   | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:16 UTC |                     |
	|                | --dry-run --memory                                                     |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr                                                |                   |         |         |                     |                     |
	|                | --driver=kvm2                                                          |                   |         |         |                     |                     |
	|                | --container-runtime=crio                                               |                   |         |         |                     |                     |
	| dashboard      | --url --port 36195                                                     | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:16 UTC |                     |
	|                | -p functional-581506                                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                   |         |         |                     |                     |
	| service        | functional-581506 service list                                         | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:18 UTC | 10 May 25 18:18 UTC |
	| service        | functional-581506 service list                                         | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:18 UTC | 10 May 25 18:18 UTC |
	|                | -o json                                                                |                   |         |         |                     |                     |
	| update-context | functional-581506                                                      | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:18 UTC | 10 May 25 18:18 UTC |
	|                | update-context                                                         |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                   |         |         |                     |                     |
	| update-context | functional-581506                                                      | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:18 UTC | 10 May 25 18:18 UTC |
	|                | update-context                                                         |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                   |         |         |                     |                     |
	| update-context | functional-581506                                                      | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:18 UTC | 10 May 25 18:18 UTC |
	|                | update-context                                                         |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                   |         |         |                     |                     |
	| service        | functional-581506 service                                              | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:18 UTC |                     |
	|                | --namespace=default --https                                            |                   |         |         |                     |                     |
	|                | --url hello-node                                                       |                   |         |         |                     |                     |
	| image          | functional-581506                                                      | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:18 UTC | 10 May 25 18:18 UTC |
	|                | image ls --format short                                                |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                   |         |         |                     |                     |
	| image          | functional-581506                                                      | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:18 UTC | 10 May 25 18:18 UTC |
	|                | image ls --format yaml                                                 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                   |         |         |                     |                     |
	| service        | functional-581506                                                      | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:18 UTC |                     |
	|                | service hello-node --url                                               |                   |         |         |                     |                     |
	|                | --format={{.IP}}                                                       |                   |         |         |                     |                     |
	| ssh            | functional-581506 ssh pgrep                                            | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:18 UTC |                     |
	|                | buildkitd                                                              |                   |         |         |                     |                     |
	| service        | functional-581506 service                                              | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:18 UTC |                     |
	|                | hello-node --url                                                       |                   |         |         |                     |                     |
	| image          | functional-581506 image build -t                                       | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:18 UTC | 10 May 25 18:18 UTC |
	|                | localhost/my-image:functional-581506                                   |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                   |         |         |                     |                     |
	| image          | functional-581506 image ls                                             | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:18 UTC | 10 May 25 18:18 UTC |
	| image          | functional-581506                                                      | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:18 UTC | 10 May 25 18:18 UTC |
	|                | image ls --format json                                                 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                   |         |         |                     |                     |
	| image          | functional-581506                                                      | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:18 UTC | 10 May 25 18:18 UTC |
	|                | image ls --format table                                                |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                   |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/05/10 18:16:06
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0510 18:16:06.249127  407218 out.go:345] Setting OutFile to fd 1 ...
	I0510 18:16:06.249230  407218 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 18:16:06.249245  407218 out.go:358] Setting ErrFile to fd 2...
	I0510 18:16:06.249249  407218 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 18:16:06.249538  407218 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-388787/.minikube/bin
	I0510 18:16:06.250056  407218 out.go:352] Setting JSON to false
	I0510 18:16:06.250986  407218 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":28714,"bootTime":1746872252,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0510 18:16:06.251048  407218 start.go:140] virtualization: kvm guest
	I0510 18:16:06.252905  407218 out.go:177] * [functional-581506] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0510 18:16:06.254379  407218 out.go:177]   - MINIKUBE_LOCATION=20720
	I0510 18:16:06.254378  407218 notify.go:220] Checking for updates...
	I0510 18:16:06.255877  407218 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0510 18:16:06.257250  407218 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20720-388787/kubeconfig
	I0510 18:16:06.258440  407218 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-388787/.minikube
	I0510 18:16:06.259843  407218 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0510 18:16:06.261024  407218 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0510 18:16:06.262455  407218 config.go:182] Loaded profile config "functional-581506": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 18:16:06.262923  407218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 18:16:06.263004  407218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:16:06.279063  407218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38601
	I0510 18:16:06.279680  407218 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:16:06.280465  407218 main.go:141] libmachine: Using API Version  1
	I0510 18:16:06.280504  407218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:16:06.280895  407218 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:16:06.281110  407218 main.go:141] libmachine: (functional-581506) Calling .DriverName
	I0510 18:16:06.281407  407218 driver.go:404] Setting default libvirt URI to qemu:///system
	I0510 18:16:06.281717  407218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 18:16:06.281756  407218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:16:06.297201  407218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44999
	I0510 18:16:06.297734  407218 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:16:06.298367  407218 main.go:141] libmachine: Using API Version  1
	I0510 18:16:06.298396  407218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:16:06.298758  407218 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:16:06.298967  407218 main.go:141] libmachine: (functional-581506) Calling .DriverName
	I0510 18:16:06.333465  407218 out.go:177] * Using the kvm2 driver based on existing profile
	I0510 18:16:06.334614  407218 start.go:304] selected driver: kvm2
	I0510 18:16:06.334628  407218 start.go:908] validating driver "kvm2" against &{Name:functional-581506 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.33.0 ClusterName:functional-581506 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.52 Port:8441 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 18:16:06.334724  407218 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0510 18:16:06.336620  407218 out.go:201] 
	W0510 18:16:06.337727  407218 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250MiB is less than the usable minimum of 1800MB
	I0510 18:16:06.338871  407218 out.go:201] 
	
	
	==> CRI-O <==
	May 10 18:21:07 functional-581506 crio[5891]: time="2025-05-10 18:21:07.230611009Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746901267230591340,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:190727,},InodesUsed:&UInt64Value{Value:98,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=450e2ee7-6cb6-4941-b27f-3dad341a2b1a name=/runtime.v1.ImageService/ImageFsInfo
	May 10 18:21:07 functional-581506 crio[5891]: time="2025-05-10 18:21:07.231304227Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=426e91e2-ef3f-4cd2-8d90-634c08882dbb name=/runtime.v1.RuntimeService/ListContainers
	May 10 18:21:07 functional-581506 crio[5891]: time="2025-05-10 18:21:07.231367004Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=426e91e2-ef3f-4cd2-8d90-634c08882dbb name=/runtime.v1.RuntimeService/ListContainers
	May 10 18:21:07 functional-581506 crio[5891]: time="2025-05-10 18:21:07.231595467Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bc59d73bfa2bf2b1a39e797a7d2b573e644354a5079881f3dd26cec1c252aba,PodSandboxId:ac1fe88b05f85fd3070ec6db14c318d8b19cd922062770aa6fc6b88cf2bc0f14,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1746900274095849002,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-t4rcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1c5c10-5db3-43e0-935a-0549799273f3,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca40d958630336ad5282e3e644a344eb6222b09f601d44b816dbc17429e58924,PodSandboxId:0c91495cd04f27933de8b107c48b7ad6314a49c58ac7a22c6acb1832e85de258,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,State:CONTAINER_RUNNING,CreatedAt:1746900270663661287,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-581506,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 65ecf0b12922dcc8259f7d51baab7e18,},Annotations:map[string]string{io.kubernetes.container.hash: 2e2dc675,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:206d421221f482411c4e5a5ef3f7102eccd8b38f07c242446855962f9958f985,PodSandboxId:a71305dd0a11cb4fec07b8ecece405b394e529aed22658f28110cc632eb39534,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1746900267571783633,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: ea7d9372-7c9e-444b-a628-0dfc4003f07d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bad506e8de60f9ec83d122523ca19234a72175234ffd3433d02684eb651ce9d,PodSandboxId:ad6bf190d55676203ab65df23981cd676ca08ed2bc2eef1dd05517d694c7e66e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,State:CONTAINER_RUNNING,CreatedAt:1746900267584333315,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-581506,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: b2dc81ade1bbda73868f61223889f8f4,},Annotations:map[string]string{io.kubernetes.container.hash: 20846f37,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1002d7979feaa7a0860a8934e8992ae4fdc369b64f2a34d3a93bf01f4e8015e3,PodSandboxId:1713a07d44b66f7d807e2bd691e25e7ecdd6e7c5d84c1261729e464047a1a031,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1746900267652578505,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-581506,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe62e874
0903c7a0badf385e7524512e,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c908b9ef4e2dd4afa4d8c8077af1366569126a578de927d95d14f07813040bab,PodSandboxId:da31c3a5af7bf008afa7c113669c143c8daf56d21cd077d4cf6dc85664b412de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,State:CONTAINER_RUNNING,CreatedAt:1746900267384004977,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sxk9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f4ab1a-93f7-4c1e-bcbe-5f9c9daaae46,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 2406bd3f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b4eccacbeea6a58cc9c575f2c2bf5f8297029f9c9d2a9264bcf3e69644b4c28,PodSandboxId:e49ea2b58308c6c0b9b2908ae1ab6a5818f361d3a75849eac0ab8eb63fab41ca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,State:CONTAINER_EXITED,CreatedAt:1746900143518984605,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sxk9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f4ab1a-93f7-4c1e-bcbe-5f9c9daaae46,},Annotations:map[string]string{io.kubernet
es.container.hash: 2406bd3f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ddf6914642a098d580c48db641460c4197df74a06bf7008e362f610f185934d,PodSandboxId:00c4138d2ab0d3a6880991ae6ca2f7c7e3c2de33b60a469043a91f7f8adef12d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1746900143498396555,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea7d9372-7c9e-444b-a628-0dfc4003f07d,},Annotations:map[string]string{io.kubernetes.container.
hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67bef24b725ebf7a2b7f343d7456516d6b5de38118f9cf48e7d70d9146ce2087,PodSandboxId:e30af250008246b61b90a3718d1c328f2984559c29b8526e0386129454a98b4a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_EXITED,CreatedAt:1746900143526731809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-t4rcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1c5c10-5db3-43e0-935a-0549799273f3,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.con
tainer.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5879bea6c3a25517766471c3eec758ce0c6d853db7055e1f3505263a674ed969,PodSandboxId:2cc3ee9d3458fbdf619a3c176b445eff63eefe6d42ab071484b6ca448013de07,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_EXITED,CreatedAt:1746900136904565424,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-581506,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe62e8740903c7a0badf385e7524512e,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74fd0b7de642965eb7e03cf324017cb2195034685758e46efbd5e6997aba9ae5,PodSandboxId:45aa7f96fbe49dd74e9cdfcc97884ce5caba88b39b6e9b00f2357661ecbba1a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,State:CONTAINER_EXITED,CreatedAt:1746900136908093042,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functiona
l-581506,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2dc81ade1bbda73868f61223889f8f4,},Annotations:map[string]string{io.kubernetes.container.hash: 20846f37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc42d63e6220a437de1d056d765ed97df2e6978798401b10283f61c7b1bc895b,PodSandboxId:6ed00def2c968d5a51634c7dafc6e6cc749b20e361a2365659842d41ca79ff9c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,State:CONTAINER_EXITED,CreatedAt:1746900136856417357,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-581506,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a886f34999ac0d6b56a638cab77f640,},Annotations:map[string]string{io.kubernetes.container.hash: fd54b99d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=426e91e2-ef3f-4cd2-8d90-634c08882dbb name=/runtime.v1.RuntimeService/ListContainers
	May 10 18:21:07 functional-581506 crio[5891]: time="2025-05-10 18:21:07.281434583Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=90d687f3-514e-4ecd-8d1f-1f56d161c396 name=/runtime.v1.RuntimeService/Version
	May 10 18:21:07 functional-581506 crio[5891]: time="2025-05-10 18:21:07.281510006Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=90d687f3-514e-4ecd-8d1f-1f56d161c396 name=/runtime.v1.RuntimeService/Version
	May 10 18:21:07 functional-581506 crio[5891]: time="2025-05-10 18:21:07.282639762Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ebe35b6e-edce-4735-a4d3-38fe58fbe4c3 name=/runtime.v1.ImageService/ImageFsInfo
	May 10 18:21:07 functional-581506 crio[5891]: time="2025-05-10 18:21:07.283300672Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746901267283277676,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:190727,},InodesUsed:&UInt64Value{Value:98,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ebe35b6e-edce-4735-a4d3-38fe58fbe4c3 name=/runtime.v1.ImageService/ImageFsInfo
	May 10 18:21:07 functional-581506 crio[5891]: time="2025-05-10 18:21:07.283940009Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=888f7478-4e7b-454f-a3a7-82fb07498811 name=/runtime.v1.RuntimeService/ListContainers
	May 10 18:21:07 functional-581506 crio[5891]: time="2025-05-10 18:21:07.283995606Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=888f7478-4e7b-454f-a3a7-82fb07498811 name=/runtime.v1.RuntimeService/ListContainers
	May 10 18:21:07 functional-581506 crio[5891]: time="2025-05-10 18:21:07.284414074Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bc59d73bfa2bf2b1a39e797a7d2b573e644354a5079881f3dd26cec1c252aba,PodSandboxId:ac1fe88b05f85fd3070ec6db14c318d8b19cd922062770aa6fc6b88cf2bc0f14,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1746900274095849002,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-t4rcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1c5c10-5db3-43e0-935a-0549799273f3,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca40d958630336ad5282e3e644a344eb6222b09f601d44b816dbc17429e58924,PodSandboxId:0c91495cd04f27933de8b107c48b7ad6314a49c58ac7a22c6acb1832e85de258,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,State:CONTAINER_RUNNING,CreatedAt:1746900270663661287,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-581506,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 65ecf0b12922dcc8259f7d51baab7e18,},Annotations:map[string]string{io.kubernetes.container.hash: 2e2dc675,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:206d421221f482411c4e5a5ef3f7102eccd8b38f07c242446855962f9958f985,PodSandboxId:a71305dd0a11cb4fec07b8ecece405b394e529aed22658f28110cc632eb39534,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1746900267571783633,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: ea7d9372-7c9e-444b-a628-0dfc4003f07d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bad506e8de60f9ec83d122523ca19234a72175234ffd3433d02684eb651ce9d,PodSandboxId:ad6bf190d55676203ab65df23981cd676ca08ed2bc2eef1dd05517d694c7e66e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,State:CONTAINER_RUNNING,CreatedAt:1746900267584333315,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-581506,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: b2dc81ade1bbda73868f61223889f8f4,},Annotations:map[string]string{io.kubernetes.container.hash: 20846f37,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1002d7979feaa7a0860a8934e8992ae4fdc369b64f2a34d3a93bf01f4e8015e3,PodSandboxId:1713a07d44b66f7d807e2bd691e25e7ecdd6e7c5d84c1261729e464047a1a031,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1746900267652578505,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-581506,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe62e874
0903c7a0badf385e7524512e,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c908b9ef4e2dd4afa4d8c8077af1366569126a578de927d95d14f07813040bab,PodSandboxId:da31c3a5af7bf008afa7c113669c143c8daf56d21cd077d4cf6dc85664b412de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,State:CONTAINER_RUNNING,CreatedAt:1746900267384004977,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sxk9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f4ab1a-93f7-4c1e-bcbe-5f9c9daaae46,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 2406bd3f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b4eccacbeea6a58cc9c575f2c2bf5f8297029f9c9d2a9264bcf3e69644b4c28,PodSandboxId:e49ea2b58308c6c0b9b2908ae1ab6a5818f361d3a75849eac0ab8eb63fab41ca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,State:CONTAINER_EXITED,CreatedAt:1746900143518984605,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sxk9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f4ab1a-93f7-4c1e-bcbe-5f9c9daaae46,},Annotations:map[string]string{io.kubernet
es.container.hash: 2406bd3f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ddf6914642a098d580c48db641460c4197df74a06bf7008e362f610f185934d,PodSandboxId:00c4138d2ab0d3a6880991ae6ca2f7c7e3c2de33b60a469043a91f7f8adef12d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1746900143498396555,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea7d9372-7c9e-444b-a628-0dfc4003f07d,},Annotations:map[string]string{io.kubernetes.container.
hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67bef24b725ebf7a2b7f343d7456516d6b5de38118f9cf48e7d70d9146ce2087,PodSandboxId:e30af250008246b61b90a3718d1c328f2984559c29b8526e0386129454a98b4a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_EXITED,CreatedAt:1746900143526731809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-t4rcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1c5c10-5db3-43e0-935a-0549799273f3,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.con
tainer.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5879bea6c3a25517766471c3eec758ce0c6d853db7055e1f3505263a674ed969,PodSandboxId:2cc3ee9d3458fbdf619a3c176b445eff63eefe6d42ab071484b6ca448013de07,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_EXITED,CreatedAt:1746900136904565424,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-581506,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe62e8740903c7a0badf385e7524512e,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74fd0b7de642965eb7e03cf324017cb2195034685758e46efbd5e6997aba9ae5,PodSandboxId:45aa7f96fbe49dd74e9cdfcc97884ce5caba88b39b6e9b00f2357661ecbba1a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,State:CONTAINER_EXITED,CreatedAt:1746900136908093042,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functiona
l-581506,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2dc81ade1bbda73868f61223889f8f4,},Annotations:map[string]string{io.kubernetes.container.hash: 20846f37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc42d63e6220a437de1d056d765ed97df2e6978798401b10283f61c7b1bc895b,PodSandboxId:6ed00def2c968d5a51634c7dafc6e6cc749b20e361a2365659842d41ca79ff9c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,State:CONTAINER_EXITED,CreatedAt:1746900136856417357,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-581506,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a886f34999ac0d6b56a638cab77f640,},Annotations:map[string]string{io.kubernetes.container.hash: fd54b99d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=888f7478-4e7b-454f-a3a7-82fb07498811 name=/runtime.v1.RuntimeService/ListContainers
	May 10 18:21:07 functional-581506 crio[5891]: time="2025-05-10 18:21:07.325755268Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=004eacb6-297c-4212-93a9-83b746f96e7c name=/runtime.v1.RuntimeService/Version
	May 10 18:21:07 functional-581506 crio[5891]: time="2025-05-10 18:21:07.325829960Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=004eacb6-297c-4212-93a9-83b746f96e7c name=/runtime.v1.RuntimeService/Version
	May 10 18:21:07 functional-581506 crio[5891]: time="2025-05-10 18:21:07.327218963Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=72a0d40f-bd53-4ed6-8df0-018c0fddda0c name=/runtime.v1.ImageService/ImageFsInfo
	May 10 18:21:07 functional-581506 crio[5891]: time="2025-05-10 18:21:07.327799586Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746901267327778229,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:190727,},InodesUsed:&UInt64Value{Value:98,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=72a0d40f-bd53-4ed6-8df0-018c0fddda0c name=/runtime.v1.ImageService/ImageFsInfo
	May 10 18:21:07 functional-581506 crio[5891]: time="2025-05-10 18:21:07.328356587Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8aa8f6d3-1169-4abc-a6a6-ef4e16d4ac70 name=/runtime.v1.RuntimeService/ListContainers
	May 10 18:21:07 functional-581506 crio[5891]: time="2025-05-10 18:21:07.328425140Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8aa8f6d3-1169-4abc-a6a6-ef4e16d4ac70 name=/runtime.v1.RuntimeService/ListContainers
	May 10 18:21:07 functional-581506 crio[5891]: time="2025-05-10 18:21:07.328758706Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bc59d73bfa2bf2b1a39e797a7d2b573e644354a5079881f3dd26cec1c252aba,PodSandboxId:ac1fe88b05f85fd3070ec6db14c318d8b19cd922062770aa6fc6b88cf2bc0f14,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1746900274095849002,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-t4rcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1c5c10-5db3-43e0-935a-0549799273f3,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca40d958630336ad5282e3e644a344eb6222b09f601d44b816dbc17429e58924,PodSandboxId:0c91495cd04f27933de8b107c48b7ad6314a49c58ac7a22c6acb1832e85de258,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,State:CONTAINER_RUNNING,CreatedAt:1746900270663661287,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-581506,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 65ecf0b12922dcc8259f7d51baab7e18,},Annotations:map[string]string{io.kubernetes.container.hash: 2e2dc675,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:206d421221f482411c4e5a5ef3f7102eccd8b38f07c242446855962f9958f985,PodSandboxId:a71305dd0a11cb4fec07b8ecece405b394e529aed22658f28110cc632eb39534,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1746900267571783633,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: ea7d9372-7c9e-444b-a628-0dfc4003f07d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bad506e8de60f9ec83d122523ca19234a72175234ffd3433d02684eb651ce9d,PodSandboxId:ad6bf190d55676203ab65df23981cd676ca08ed2bc2eef1dd05517d694c7e66e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,State:CONTAINER_RUNNING,CreatedAt:1746900267584333315,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-581506,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: b2dc81ade1bbda73868f61223889f8f4,},Annotations:map[string]string{io.kubernetes.container.hash: 20846f37,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1002d7979feaa7a0860a8934e8992ae4fdc369b64f2a34d3a93bf01f4e8015e3,PodSandboxId:1713a07d44b66f7d807e2bd691e25e7ecdd6e7c5d84c1261729e464047a1a031,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1746900267652578505,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-581506,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe62e874
0903c7a0badf385e7524512e,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c908b9ef4e2dd4afa4d8c8077af1366569126a578de927d95d14f07813040bab,PodSandboxId:da31c3a5af7bf008afa7c113669c143c8daf56d21cd077d4cf6dc85664b412de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,State:CONTAINER_RUNNING,CreatedAt:1746900267384004977,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sxk9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f4ab1a-93f7-4c1e-bcbe-5f9c9daaae46,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 2406bd3f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b4eccacbeea6a58cc9c575f2c2bf5f8297029f9c9d2a9264bcf3e69644b4c28,PodSandboxId:e49ea2b58308c6c0b9b2908ae1ab6a5818f361d3a75849eac0ab8eb63fab41ca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,State:CONTAINER_EXITED,CreatedAt:1746900143518984605,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sxk9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f4ab1a-93f7-4c1e-bcbe-5f9c9daaae46,},Annotations:map[string]string{io.kubernet
es.container.hash: 2406bd3f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ddf6914642a098d580c48db641460c4197df74a06bf7008e362f610f185934d,PodSandboxId:00c4138d2ab0d3a6880991ae6ca2f7c7e3c2de33b60a469043a91f7f8adef12d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1746900143498396555,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea7d9372-7c9e-444b-a628-0dfc4003f07d,},Annotations:map[string]string{io.kubernetes.container.
hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67bef24b725ebf7a2b7f343d7456516d6b5de38118f9cf48e7d70d9146ce2087,PodSandboxId:e30af250008246b61b90a3718d1c328f2984559c29b8526e0386129454a98b4a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_EXITED,CreatedAt:1746900143526731809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-t4rcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1c5c10-5db3-43e0-935a-0549799273f3,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.con
tainer.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5879bea6c3a25517766471c3eec758ce0c6d853db7055e1f3505263a674ed969,PodSandboxId:2cc3ee9d3458fbdf619a3c176b445eff63eefe6d42ab071484b6ca448013de07,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_EXITED,CreatedAt:1746900136904565424,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-581506,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe62e8740903c7a0badf385e7524512e,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74fd0b7de642965eb7e03cf324017cb2195034685758e46efbd5e6997aba9ae5,PodSandboxId:45aa7f96fbe49dd74e9cdfcc97884ce5caba88b39b6e9b00f2357661ecbba1a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,State:CONTAINER_EXITED,CreatedAt:1746900136908093042,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functiona
l-581506,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2dc81ade1bbda73868f61223889f8f4,},Annotations:map[string]string{io.kubernetes.container.hash: 20846f37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc42d63e6220a437de1d056d765ed97df2e6978798401b10283f61c7b1bc895b,PodSandboxId:6ed00def2c968d5a51634c7dafc6e6cc749b20e361a2365659842d41ca79ff9c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,State:CONTAINER_EXITED,CreatedAt:1746900136856417357,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-581506,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a886f34999ac0d6b56a638cab77f640,},Annotations:map[string]string{io.kubernetes.container.hash: fd54b99d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8aa8f6d3-1169-4abc-a6a6-ef4e16d4ac70 name=/runtime.v1.RuntimeService/ListContainers
	May 10 18:21:07 functional-581506 crio[5891]: time="2025-05-10 18:21:07.383731861Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6f41aa1d-a0d1-4e19-9e0d-81b4f15b0e9f name=/runtime.v1.RuntimeService/Version
	May 10 18:21:07 functional-581506 crio[5891]: time="2025-05-10 18:21:07.383852984Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6f41aa1d-a0d1-4e19-9e0d-81b4f15b0e9f name=/runtime.v1.RuntimeService/Version
	May 10 18:21:07 functional-581506 crio[5891]: time="2025-05-10 18:21:07.386841879Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=94d5b021-7d3e-410f-93a4-77ccc93a6fca name=/runtime.v1.ImageService/ImageFsInfo
	May 10 18:21:07 functional-581506 crio[5891]: time="2025-05-10 18:21:07.387502775Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746901267387476862,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:190727,},InodesUsed:&UInt64Value{Value:98,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=94d5b021-7d3e-410f-93a4-77ccc93a6fca name=/runtime.v1.ImageService/ImageFsInfo
	May 10 18:21:07 functional-581506 crio[5891]: time="2025-05-10 18:21:07.388303075Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c03b8cfe-6efc-4c37-b6c3-42eaac6b1623 name=/runtime.v1.RuntimeService/ListContainers
	May 10 18:21:07 functional-581506 crio[5891]: time="2025-05-10 18:21:07.388369223Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c03b8cfe-6efc-4c37-b6c3-42eaac6b1623 name=/runtime.v1.RuntimeService/ListContainers
	May 10 18:21:07 functional-581506 crio[5891]: time="2025-05-10 18:21:07.388614016Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bc59d73bfa2bf2b1a39e797a7d2b573e644354a5079881f3dd26cec1c252aba,PodSandboxId:ac1fe88b05f85fd3070ec6db14c318d8b19cd922062770aa6fc6b88cf2bc0f14,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1746900274095849002,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-t4rcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1c5c10-5db3-43e0-935a-0549799273f3,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca40d958630336ad5282e3e644a344eb6222b09f601d44b816dbc17429e58924,PodSandboxId:0c91495cd04f27933de8b107c48b7ad6314a49c58ac7a22c6acb1832e85de258,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,State:CONTAINER_RUNNING,CreatedAt:1746900270663661287,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-581506,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 65ecf0b12922dcc8259f7d51baab7e18,},Annotations:map[string]string{io.kubernetes.container.hash: 2e2dc675,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:206d421221f482411c4e5a5ef3f7102eccd8b38f07c242446855962f9958f985,PodSandboxId:a71305dd0a11cb4fec07b8ecece405b394e529aed22658f28110cc632eb39534,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1746900267571783633,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: ea7d9372-7c9e-444b-a628-0dfc4003f07d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bad506e8de60f9ec83d122523ca19234a72175234ffd3433d02684eb651ce9d,PodSandboxId:ad6bf190d55676203ab65df23981cd676ca08ed2bc2eef1dd05517d694c7e66e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,State:CONTAINER_RUNNING,CreatedAt:1746900267584333315,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-581506,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: b2dc81ade1bbda73868f61223889f8f4,},Annotations:map[string]string{io.kubernetes.container.hash: 20846f37,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1002d7979feaa7a0860a8934e8992ae4fdc369b64f2a34d3a93bf01f4e8015e3,PodSandboxId:1713a07d44b66f7d807e2bd691e25e7ecdd6e7c5d84c1261729e464047a1a031,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1746900267652578505,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-581506,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe62e874
0903c7a0badf385e7524512e,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c908b9ef4e2dd4afa4d8c8077af1366569126a578de927d95d14f07813040bab,PodSandboxId:da31c3a5af7bf008afa7c113669c143c8daf56d21cd077d4cf6dc85664b412de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,State:CONTAINER_RUNNING,CreatedAt:1746900267384004977,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sxk9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f4ab1a-93f7-4c1e-bcbe-5f9c9daaae46,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 2406bd3f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b4eccacbeea6a58cc9c575f2c2bf5f8297029f9c9d2a9264bcf3e69644b4c28,PodSandboxId:e49ea2b58308c6c0b9b2908ae1ab6a5818f361d3a75849eac0ab8eb63fab41ca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,State:CONTAINER_EXITED,CreatedAt:1746900143518984605,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sxk9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f4ab1a-93f7-4c1e-bcbe-5f9c9daaae46,},Annotations:map[string]string{io.kubernet
es.container.hash: 2406bd3f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ddf6914642a098d580c48db641460c4197df74a06bf7008e362f610f185934d,PodSandboxId:00c4138d2ab0d3a6880991ae6ca2f7c7e3c2de33b60a469043a91f7f8adef12d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1746900143498396555,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea7d9372-7c9e-444b-a628-0dfc4003f07d,},Annotations:map[string]string{io.kubernetes.container.
hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67bef24b725ebf7a2b7f343d7456516d6b5de38118f9cf48e7d70d9146ce2087,PodSandboxId:e30af250008246b61b90a3718d1c328f2984559c29b8526e0386129454a98b4a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_EXITED,CreatedAt:1746900143526731809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-t4rcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1c5c10-5db3-43e0-935a-0549799273f3,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.con
tainer.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5879bea6c3a25517766471c3eec758ce0c6d853db7055e1f3505263a674ed969,PodSandboxId:2cc3ee9d3458fbdf619a3c176b445eff63eefe6d42ab071484b6ca448013de07,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_EXITED,CreatedAt:1746900136904565424,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-581506,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe62e8740903c7a0badf385e7524512e,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74fd0b7de642965eb7e03cf324017cb2195034685758e46efbd5e6997aba9ae5,PodSandboxId:45aa7f96fbe49dd74e9cdfcc97884ce5caba88b39b6e9b00f2357661ecbba1a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,State:CONTAINER_EXITED,CreatedAt:1746900136908093042,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functiona
l-581506,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2dc81ade1bbda73868f61223889f8f4,},Annotations:map[string]string{io.kubernetes.container.hash: 20846f37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc42d63e6220a437de1d056d765ed97df2e6978798401b10283f61c7b1bc895b,PodSandboxId:6ed00def2c968d5a51634c7dafc6e6cc749b20e361a2365659842d41ca79ff9c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,State:CONTAINER_EXITED,CreatedAt:1746900136856417357,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-581506,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a886f34999ac0d6b56a638cab77f640,},Annotations:map[string]string{io.kubernetes.container.hash: fd54b99d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c03b8cfe-6efc-4c37-b6c3-42eaac6b1623 name=/runtime.v1.RuntimeService/ListContainers
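The Version, ImageFsInfo and ListContainers request/response pairs above are repeated CRI polling of CRI-O (by the kubelet and/or the log collector), which is why the same container list is logged several times within the same second under different request ids. As a rough, illustrative way to issue the equivalent queries by hand (assumes the crictl binary in the minikube ISO and the crio.sock path from the node's cri-socket annotation):

	minikube -p functional-581506 ssh -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	minikube -p functional-581506 ssh -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo
	minikube -p functional-581506 ssh -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a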
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2bc59d73bfa2b       1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b   16 minutes ago      Running             coredns                   2                   ac1fe88b05f85       coredns-674b8bbfcf-t4rcv
	ca40d95863033       6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4   16 minutes ago      Running             kube-apiserver            0                   0c91495cd04f2       kube-apiserver-functional-581506
	1002d7979feaa       499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1   16 minutes ago      Running             etcd                      2                   1713a07d44b66       etcd-functional-581506
	5bad506e8de60       1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02   16 minutes ago      Running             kube-controller-manager   2                   ad6bf190d5567       kube-controller-manager-functional-581506
	206d421221f48       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       4                   a71305dd0a11c       storage-provisioner
	c908b9ef4e2dd       f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68   16 minutes ago      Running             kube-proxy                2                   da31c3a5af7bf       kube-proxy-sxk9c
	67bef24b725eb       1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b   18 minutes ago      Exited              coredns                   1                   e30af25000824       coredns-674b8bbfcf-t4rcv
	2b4eccacbeea6       f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68   18 minutes ago      Exited              kube-proxy                1                   e49ea2b58308c       kube-proxy-sxk9c
	9ddf6914642a0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago      Exited              storage-provisioner       3                   00c4138d2ab0d       storage-provisioner
	74fd0b7de6429       1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02   18 minutes ago      Exited              kube-controller-manager   1                   45aa7f96fbe49       kube-controller-manager-functional-581506
	5879bea6c3a25       499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1   18 minutes ago      Exited              etcd                      1                   2cc3ee9d3458f       etcd-functional-581506
	bc42d63e6220a       8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4   18 minutes ago      Exited              kube-scheduler            1                   6ed00def2c968       kube-scheduler-functional-581506
	
	
	==> coredns [2bc59d73bfa2bf2b1a39e797a7d2b573e644354a5079881f3dd26cec1c252aba] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.0
	linux/amd64, go1.23.3, 51e11f1
	[INFO] 127.0.0.1:47540 - 49812 "HINFO IN 3817603910003911590.6949861336943334396. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.032959679s
	
	
	==> coredns [67bef24b725ebf7a2b7f343d7456516d6b5de38118f9cf48e7d70d9146ce2087] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.0
	linux/amd64, go1.23.3, 51e11f1
	[INFO] 127.0.0.1:59748 - 21482 "HINFO IN 2761340015405739266.7136990693185190550. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022015892s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
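Both CoreDNS attempts above load an identical configuration (same plugin/reload SHA512); the "lameduck mode for 5s" line from the exited attempt comes from the health plugin shutting down, and port 9153 from the container-port annotations earlier is the prometheus metrics endpoint. If the cluster were still reachable, the underlying Corefile and the previous container's logs could be pulled with commands along these lines (illustrative; assumes the default kube-system ConfigMap name):

	kubectl --context functional-581506 -n kube-system get configmap coredns -o yaml
	kubectl --context functional-581506 -n kube-system logs coredns-674b8bbfcf-t4rcv --previous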
	
	
	==> describe nodes <==
	Name:               functional-581506
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-581506
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e96c83983357cd8557f3cdfe077a25cc73d485a4
	                    minikube.k8s.io/name=functional-581506
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_05_10T18_01_15_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 May 2025 18:01:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-581506
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 May 2025 18:21:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 May 2025 18:19:08 +0000   Sat, 10 May 2025 18:01:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 May 2025 18:19:08 +0000   Sat, 10 May 2025 18:01:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 May 2025 18:19:08 +0000   Sat, 10 May 2025 18:01:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 May 2025 18:19:08 +0000   Sat, 10 May 2025 18:01:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.52
	  Hostname:    functional-581506
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912748Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912748Ki
	  pods:               110
	System Info:
	  Machine ID:                 78012ce40601437bb4c2db7efb9be33a
	  System UUID:                78012ce4-0601-437b-b4c2-db7efb9be33a
	  Boot ID:                    832a94bf-8db0-4adf-aef4-977728fcc1b7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2024.11.2
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.33.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-674b8bbfcf-t4rcv                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     19m
	  kube-system                 etcd-functional-581506                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         19m
	  kube-system                 kube-apiserver-functional-581506             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-functional-581506    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-sxk9c                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-functional-581506             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 16m                kube-proxy       
	  Normal  Starting                 18m                kube-proxy       
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m                kubelet          Node functional-581506 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m                kubelet          Node functional-581506 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m                kubelet          Node functional-581506 status is now: NodeHasSufficientPID
	  Normal  NodeReady                19m                kubelet          Node functional-581506 status is now: NodeReady
	  Normal  RegisteredNode           19m                node-controller  Node functional-581506 event: Registered Node functional-581506 in Controller
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node functional-581506 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node functional-581506 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node functional-581506 status is now: NodeHasSufficientPID
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           18m                node-controller  Node functional-581506 event: Registered Node functional-581506 in Controller
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node functional-581506 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node functional-581506 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node functional-581506 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m                node-controller  Node functional-581506 event: Registered Node functional-581506 in Controller
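The node summary above is what kubectl reports for the single control-plane node; the three clusters of Starting/NodeHasSufficient* events (19m, 18m and 16m old) line up with the container restart counts listed earlier, consistent with the functional test restarting the cluster twice. Reproducing this view directly would look roughly like:

	kubectl --context functional-581506 describe node functional-581506
	kubectl --context functional-581506 get node functional-581506 -o wide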
	
	
	==> dmesg <==
	[May10 18:00] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.000002] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.001507] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000567] (rpcbind)[143]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.143993] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000004] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.090211] kauditd_printk_skb: 1 callbacks suppressed
	[May10 18:01] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.148940] kauditd_printk_skb: 67 callbacks suppressed
	[  +0.675797] kauditd_printk_skb: 19 callbacks suppressed
	[ +10.795650] kauditd_printk_skb: 76 callbacks suppressed
	[ +20.864703] kauditd_printk_skb: 22 callbacks suppressed
	[May10 18:02] kauditd_printk_skb: 34 callbacks suppressed
	[  +4.648687] kauditd_printk_skb: 132 callbacks suppressed
	[  +5.789009] kauditd_printk_skb: 9 callbacks suppressed
	[ +13.341647] kauditd_printk_skb: 12 callbacks suppressed
	[May10 18:04] kauditd_printk_skb: 90 callbacks suppressed
	[  +1.054815] kauditd_printk_skb: 130 callbacks suppressed
	[  +0.904906] kauditd_printk_skb: 16 callbacks suppressed
	[May10 18:08] kauditd_printk_skb: 22 callbacks suppressed
	
	
	==> etcd [1002d7979feaa7a0860a8934e8992ae4fdc369b64f2a34d3a93bf01f4e8015e3] <==
	{"level":"info","ts":"2025-05-10T18:04:30.584641Z","caller":"embed/etcd.go:908","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-05-10T18:04:30.584794Z","caller":"embed/etcd.go:633","msg":"serving peer traffic","address":"192.168.39.52:2380"}
	{"level":"info","ts":"2025-05-10T18:04:30.584823Z","caller":"embed/etcd.go:603","msg":"cmux::serve","address":"192.168.39.52:2380"}
	{"level":"info","ts":"2025-05-10T18:04:31.534989Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3baf479dc31b93a9 is starting a new election at term 3"}
	{"level":"info","ts":"2025-05-10T18:04:31.535049Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3baf479dc31b93a9 became pre-candidate at term 3"}
	{"level":"info","ts":"2025-05-10T18:04:31.535080Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3baf479dc31b93a9 received MsgPreVoteResp from 3baf479dc31b93a9 at term 3"}
	{"level":"info","ts":"2025-05-10T18:04:31.535099Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3baf479dc31b93a9 became candidate at term 4"}
	{"level":"info","ts":"2025-05-10T18:04:31.535152Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3baf479dc31b93a9 received MsgVoteResp from 3baf479dc31b93a9 at term 4"}
	{"level":"info","ts":"2025-05-10T18:04:31.535163Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3baf479dc31b93a9 became leader at term 4"}
	{"level":"info","ts":"2025-05-10T18:04:31.535174Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3baf479dc31b93a9 elected leader 3baf479dc31b93a9 at term 4"}
	{"level":"info","ts":"2025-05-10T18:04:31.541728Z","caller":"etcdserver/server.go:2144","msg":"published local member to cluster through raft","local-member-id":"3baf479dc31b93a9","local-member-attributes":"{Name:functional-581506 ClientURLs:[https://192.168.39.52:2379]}","request-path":"/0/members/3baf479dc31b93a9/attributes","cluster-id":"26c9414d925de00c","publish-timeout":"7s"}
	{"level":"info","ts":"2025-05-10T18:04:31.541955Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T18:04:31.542039Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T18:04:31.542717Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T18:04:31.544923Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-05-10T18:04:31.544977Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-05-10T18:04:31.545343Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T18:04:31.545926Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-05-10T18:04:31.550131Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.52:2379"}
	{"level":"info","ts":"2025-05-10T18:14:31.638045Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":969}
	{"level":"info","ts":"2025-05-10T18:14:31.648248Z","caller":"mvcc/kvstore_compaction.go:71","msg":"finished scheduled compaction","compact-revision":969,"took":"9.559967ms","hash":1876746662,"current-db-size-bytes":3153920,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":3153920,"current-db-size-in-use":"3.2 MB"}
	{"level":"info","ts":"2025-05-10T18:14:31.648346Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":1876746662,"revision":969,"compact-revision":-1}
	{"level":"info","ts":"2025-05-10T18:19:31.645712Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1211}
	{"level":"info","ts":"2025-05-10T18:19:31.648801Z","caller":"mvcc/kvstore_compaction.go:71","msg":"finished scheduled compaction","compact-revision":1211,"took":"2.735142ms","hash":3938549745,"current-db-size-bytes":3153920,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":2113536,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2025-05-10T18:19:31.648854Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3938549745,"revision":1211,"compact-revision":969}
	
	
	==> etcd [5879bea6c3a25517766471c3eec758ce0c6d853db7055e1f3505263a674ed969] <==
	{"level":"info","ts":"2025-05-10T18:02:21.030061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3baf479dc31b93a9 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-05-10T18:02:21.030106Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3baf479dc31b93a9 received MsgPreVoteResp from 3baf479dc31b93a9 at term 2"}
	{"level":"info","ts":"2025-05-10T18:02:21.030146Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3baf479dc31b93a9 became candidate at term 3"}
	{"level":"info","ts":"2025-05-10T18:02:21.030207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3baf479dc31b93a9 received MsgVoteResp from 3baf479dc31b93a9 at term 3"}
	{"level":"info","ts":"2025-05-10T18:02:21.030228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3baf479dc31b93a9 became leader at term 3"}
	{"level":"info","ts":"2025-05-10T18:02:21.030247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3baf479dc31b93a9 elected leader 3baf479dc31b93a9 at term 3"}
	{"level":"info","ts":"2025-05-10T18:02:21.038152Z","caller":"etcdserver/server.go:2144","msg":"published local member to cluster through raft","local-member-id":"3baf479dc31b93a9","local-member-attributes":"{Name:functional-581506 ClientURLs:[https://192.168.39.52:2379]}","request-path":"/0/members/3baf479dc31b93a9/attributes","cluster-id":"26c9414d925de00c","publish-timeout":"7s"}
	{"level":"info","ts":"2025-05-10T18:02:21.038369Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T18:02:21.041197Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T18:02:21.041717Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T18:02:21.044437Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-05-10T18:02:21.044826Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T18:02:21.052743Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.52:2379"}
	{"level":"info","ts":"2025-05-10T18:02:21.064014Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-05-10T18:02:21.065946Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-05-10T18:02:47.594480Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-05-10T18:02:47.594538Z","caller":"embed/etcd.go:408","msg":"closing etcd server","name":"functional-581506","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.52:2380"],"advertise-client-urls":["https://192.168.39.52:2379"]}
	{"level":"warn","ts":"2025-05-10T18:02:47.692253Z","caller":"embed/serve.go:235","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.52:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-05-10T18:02:47.692414Z","caller":"embed/serve.go:237","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.52:2379: use of closed network connection"}
	{"level":"info","ts":"2025-05-10T18:02:47.692332Z","caller":"etcdserver/server.go:1546","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"3baf479dc31b93a9","current-leader-member-id":"3baf479dc31b93a9"}
	{"level":"warn","ts":"2025-05-10T18:02:47.692493Z","caller":"embed/serve.go:235","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-05-10T18:02:47.692590Z","caller":"embed/serve.go:237","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-05-10T18:02:47.696041Z","caller":"embed/etcd.go:613","msg":"stopping serving peer traffic","address":"192.168.39.52:2380"}
	{"level":"info","ts":"2025-05-10T18:02:47.696298Z","caller":"embed/etcd.go:618","msg":"stopped serving peer traffic","address":"192.168.39.52:2380"}
	{"level":"info","ts":"2025-05-10T18:02:47.696390Z","caller":"embed/etcd.go:410","msg":"closed etcd server","name":"functional-581506","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.52:2380"],"advertise-client-urls":["https://192.168.39.52:2379"]}
	
	
	==> kernel <==
	 18:21:07 up 20 min,  0 user,  load average: 0.37, 0.26, 0.16
	Linux functional-581506 5.10.207 #1 SMP Fri May 9 03:49:24 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2024.11.2"
	
	
	==> kube-apiserver [ca40d958630336ad5282e3e644a344eb6222b09f601d44b816dbc17429e58924] <==
	I0510 18:04:32.992976       1 shared_informer.go:357] "Caches are synced" controller="node_authorizer"
	I0510 18:04:33.822508       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0510 18:04:33.885749       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0510 18:04:35.065207       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0510 18:04:35.106821       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0510 18:04:35.136969       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0510 18:04:35.144671       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0510 18:04:36.229591       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0510 18:04:36.517419       1 controller.go:667] quota admission added evaluator for: endpoints
	I0510 18:04:36.581373       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 18:04:36.669214       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0510 18:08:43.916202       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 18:08:43.922038       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.104.141.104"}
	I0510 18:08:47.337692       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 18:08:48.351686       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.104.34.72"}
	I0510 18:08:48.356985       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 18:08:49.106605       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 18:08:49.110056       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.108.45.176"}
	I0510 18:08:54.502666       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 18:08:54.508618       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.111.228.11"}
	I0510 18:14:32.894709       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 18:16:07.436962       1 controller.go:667] quota admission added evaluator for: namespaces
	I0510 18:16:07.748671       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.114.28"}
	I0510 18:16:07.755311       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 18:16:07.788681       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.60.139"}
	
	
	==> kube-controller-manager [5bad506e8de60f9ec83d122523ca19234a72175234ffd3433d02684eb651ce9d] <==
	I0510 18:04:36.183471       1 shared_informer.go:357] "Caches are synced" controller="PVC protection"
	I0510 18:04:36.186114       1 shared_informer.go:357] "Caches are synced" controller="GC"
	I0510 18:04:36.188512       1 shared_informer.go:357] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0510 18:04:36.213570       1 shared_informer.go:357] "Caches are synced" controller="ephemeral"
	I0510 18:04:36.215649       1 shared_informer.go:357] "Caches are synced" controller="endpoint_slice_mirroring"
	I0510 18:04:36.228498       1 shared_informer.go:357] "Caches are synced" controller="ReplicaSet"
	I0510 18:04:36.233748       1 shared_informer.go:357] "Caches are synced" controller="stateful set"
	I0510 18:04:36.311659       1 shared_informer.go:357] "Caches are synced" controller="daemon sets"
	I0510 18:04:36.312725       1 shared_informer.go:357] "Caches are synced" controller="attach detach"
	I0510 18:04:36.380908       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0510 18:04:36.428129       1 shared_informer.go:357] "Caches are synced" controller="service account"
	I0510 18:04:36.472582       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0510 18:04:36.518918       1 shared_informer.go:357] "Caches are synced" controller="namespace"
	I0510 18:04:36.901758       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0510 18:04:36.901798       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0510 18:04:36.901804       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0510 18:04:36.904012       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	E0510 18:16:07.562224       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 18:16:07.570114       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 18:16:07.576704       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 18:16:07.584223       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 18:16:07.591210       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 18:16:07.595649       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 18:16:07.607783       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 18:16:07.608070       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [74fd0b7de642965eb7e03cf324017cb2195034685758e46efbd5e6997aba9ae5] <==
	I0510 18:02:26.095808       1 shared_informer.go:357] "Caches are synced" controller="service account"
	I0510 18:02:26.107982       1 shared_informer.go:357] "Caches are synced" controller="endpoint_slice"
	I0510 18:02:26.113575       1 shared_informer.go:357] "Caches are synced" controller="namespace"
	I0510 18:02:26.114829       1 shared_informer.go:357] "Caches are synced" controller="ReplicaSet"
	I0510 18:02:26.118078       1 shared_informer.go:357] "Caches are synced" controller="cronjob"
	I0510 18:02:26.121663       1 shared_informer.go:357] "Caches are synced" controller="daemon sets"
	I0510 18:02:26.127470       1 shared_informer.go:357] "Caches are synced" controller="deployment"
	I0510 18:02:26.128559       1 shared_informer.go:357] "Caches are synced" controller="job"
	I0510 18:02:26.135167       1 shared_informer.go:357] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0510 18:02:26.140926       1 shared_informer.go:357] "Caches are synced" controller="taint-eviction-controller"
	I0510 18:02:26.149406       1 shared_informer.go:357] "Caches are synced" controller="ClusterRoleAggregator"
	I0510 18:02:26.194556       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrapproving"
	I0510 18:02:26.234159       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0510 18:02:26.234341       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0510 18:02:26.234405       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0510 18:02:26.234431       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0510 18:02:26.248426       1 shared_informer.go:357] "Caches are synced" controller="HPA"
	I0510 18:02:26.262947       1 shared_informer.go:357] "Caches are synced" controller="persistent volume"
	I0510 18:02:26.312142       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0510 18:02:26.393774       1 shared_informer.go:357] "Caches are synced" controller="attach detach"
	I0510 18:02:26.402315       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0510 18:02:26.844411       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0510 18:02:26.844453       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0510 18:02:26.844461       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0510 18:02:26.854122       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [2b4eccacbeea6a58cc9c575f2c2bf5f8297029f9c9d2a9264bcf3e69644b4c28] <==
	E0510 18:02:23.839411       1 proxier.go:732] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0510 18:02:23.859564       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.39.52"]
	E0510 18:02:23.859640       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0510 18:02:23.913819       1 server_linux.go:122] "No iptables support for family" ipFamily="IPv6"
	I0510 18:02:23.913976       1 server.go:256] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0510 18:02:23.914004       1 server_linux.go:145] "Using iptables Proxier"
	I0510 18:02:23.928588       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0510 18:02:23.928908       1 server.go:516] "Version info" version="v1.33.0"
	I0510 18:02:23.928939       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0510 18:02:23.939076       1 config.go:199] "Starting service config controller"
	I0510 18:02:23.939113       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0510 18:02:23.939140       1 config.go:105] "Starting endpoint slice config controller"
	I0510 18:02:23.939144       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0510 18:02:23.939155       1 config.go:440] "Starting serviceCIDR config controller"
	I0510 18:02:23.939158       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0510 18:02:23.939762       1 config.go:329] "Starting node config controller"
	I0510 18:02:23.939818       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0510 18:02:24.039379       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
	I0510 18:02:24.039423       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	I0510 18:02:24.039626       1 shared_informer.go:357] "Caches are synced" controller="service config"
	I0510 18:02:24.040552       1 shared_informer.go:357] "Caches are synced" controller="node config"
	
	
	==> kube-proxy [c908b9ef4e2dd4afa4d8c8077af1366569126a578de927d95d14f07813040bab] <==
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0510 18:04:28.043194       1 server.go:704] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-581506\": dial tcp 192.168.39.52:8441: connect: connection refused"
	E0510 18:04:29.208643       1 server.go:704] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-581506\": dial tcp 192.168.39.52:8441: connect: connection refused"
	I0510 18:04:32.936969       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.39.52"]
	E0510 18:04:32.937354       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0510 18:04:33.046754       1 server_linux.go:122] "No iptables support for family" ipFamily="IPv6"
	I0510 18:04:33.046916       1 server.go:256] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0510 18:04:33.046981       1 server_linux.go:145] "Using iptables Proxier"
	I0510 18:04:33.060194       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0510 18:04:33.060559       1 server.go:516] "Version info" version="v1.33.0"
	I0510 18:04:33.060762       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0510 18:04:33.065256       1 config.go:199] "Starting service config controller"
	I0510 18:04:33.068415       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0510 18:04:33.068567       1 config.go:105] "Starting endpoint slice config controller"
	I0510 18:04:33.068590       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0510 18:04:33.068693       1 config.go:440] "Starting serviceCIDR config controller"
	I0510 18:04:33.073303       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0510 18:04:33.073374       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
	I0510 18:04:33.068977       1 config.go:329] "Starting node config controller"
	I0510 18:04:33.073428       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0510 18:04:33.169005       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	I0510 18:04:33.169120       1 shared_informer.go:357] "Caches are synced" controller="service config"
	I0510 18:04:33.173643       1 shared_informer.go:357] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [bc42d63e6220a437de1d056d765ed97df2e6978798401b10283f61c7b1bc895b] <==
	I0510 18:02:21.278714       1 serving.go:386] Generated self-signed cert in-memory
	W0510 18:02:22.765853       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0510 18:02:22.766075       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0510 18:02:22.766103       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0510 18:02:22.766193       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0510 18:02:22.804654       1 server.go:171] "Starting Kubernetes Scheduler" version="v1.33.0"
	I0510 18:02:22.804770       1 server.go:173] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0510 18:02:22.806849       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0510 18:02:22.807232       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0510 18:02:22.807327       1 shared_informer.go:350] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0510 18:02:22.807360       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0510 18:02:22.907841       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0510 18:02:47.604715       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	May 10 18:20:29 functional-581506 kubelet[6715]: E0510 18:20:29.943056    6715 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod0b1c5c10-5db3-43e0-935a-0549799273f3/crio-e30af250008246b61b90a3718d1c328f2984559c29b8526e0386129454a98b4a: Error finding container e30af250008246b61b90a3718d1c328f2984559c29b8526e0386129454a98b4a: Status 404 returned error can't find the container with id e30af250008246b61b90a3718d1c328f2984559c29b8526e0386129454a98b4a
	May 10 18:20:29 functional-581506 kubelet[6715]: E0510 18:20:29.943181    6715 manager.go:1116] Failed to create existing container: /kubepods/burstable/podb2dc81ade1bbda73868f61223889f8f4/crio-45aa7f96fbe49dd74e9cdfcc97884ce5caba88b39b6e9b00f2357661ecbba1a3: Error finding container 45aa7f96fbe49dd74e9cdfcc97884ce5caba88b39b6e9b00f2357661ecbba1a3: Status 404 returned error can't find the container with id 45aa7f96fbe49dd74e9cdfcc97884ce5caba88b39b6e9b00f2357661ecbba1a3
	May 10 18:20:29 functional-581506 kubelet[6715]: E0510 18:20:29.943333    6715 manager.go:1116] Failed to create existing container: /kubepods/besteffort/podc3f4ab1a-93f7-4c1e-bcbe-5f9c9daaae46/crio-e49ea2b58308c6c0b9b2908ae1ab6a5818f361d3a75849eac0ab8eb63fab41ca: Error finding container e49ea2b58308c6c0b9b2908ae1ab6a5818f361d3a75849eac0ab8eb63fab41ca: Status 404 returned error can't find the container with id e49ea2b58308c6c0b9b2908ae1ab6a5818f361d3a75849eac0ab8eb63fab41ca
	May 10 18:20:29 functional-581506 kubelet[6715]: E0510 18:20:29.943679    6715 manager.go:1116] Failed to create existing container: /kubepods/besteffort/podea7d9372-7c9e-444b-a628-0dfc4003f07d/crio-00c4138d2ab0d3a6880991ae6ca2f7c7e3c2de33b60a469043a91f7f8adef12d: Error finding container 00c4138d2ab0d3a6880991ae6ca2f7c7e3c2de33b60a469043a91f7f8adef12d: Status 404 returned error can't find the container with id 00c4138d2ab0d3a6880991ae6ca2f7c7e3c2de33b60a469043a91f7f8adef12d
	May 10 18:20:29 functional-581506 kubelet[6715]: E0510 18:20:29.943790    6715 manager.go:1116] Failed to create existing container: /kubepods/burstable/podfe62e8740903c7a0badf385e7524512e/crio-2cc3ee9d3458fbdf619a3c176b445eff63eefe6d42ab071484b6ca448013de07: Error finding container 2cc3ee9d3458fbdf619a3c176b445eff63eefe6d42ab071484b6ca448013de07: Status 404 returned error can't find the container with id 2cc3ee9d3458fbdf619a3c176b445eff63eefe6d42ab071484b6ca448013de07
	May 10 18:20:30 functional-581506 kubelet[6715]: E0510 18:20:30.211391    6715 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746901230211002381,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:190727,},InodesUsed:&UInt64Value{Value:98,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:20:30 functional-581506 kubelet[6715]: E0510 18:20:30.211631    6715 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746901230211002381,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:190727,},InodesUsed:&UInt64Value{Value:98,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:20:36 functional-581506 kubelet[6715]: E0510 18:20:36.832922    6715 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-581506_kube-system_0a886f34999ac0d6b56a638cab77f640_2\" already exists"
	May 10 18:20:36 functional-581506 kubelet[6715]: E0510 18:20:36.833428    6715 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-581506_kube-system_0a886f34999ac0d6b56a638cab77f640_2\" already exists" pod="kube-system/kube-scheduler-functional-581506"
	May 10 18:20:36 functional-581506 kubelet[6715]: E0510 18:20:36.833513    6715 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-581506_kube-system_0a886f34999ac0d6b56a638cab77f640_2\" already exists" pod="kube-system/kube-scheduler-functional-581506"
	May 10 18:20:36 functional-581506 kubelet[6715]: E0510 18:20:36.833627    6715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-functional-581506_kube-system(0a886f34999ac0d6b56a638cab77f640)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-scheduler-functional-581506_kube-system(0a886f34999ac0d6b56a638cab77f640)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-scheduler-functional-581506_kube-system_0a886f34999ac0d6b56a638cab77f640_2\\\" already exists\"" pod="kube-system/kube-scheduler-functional-581506" podUID="0a886f34999ac0d6b56a638cab77f640"
	May 10 18:20:40 functional-581506 kubelet[6715]: E0510 18:20:40.213445    6715 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746901240213061643,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:190727,},InodesUsed:&UInt64Value{Value:98,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:20:40 functional-581506 kubelet[6715]: E0510 18:20:40.213995    6715 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746901240213061643,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:190727,},InodesUsed:&UInt64Value{Value:98,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:20:47 functional-581506 kubelet[6715]: E0510 18:20:47.830771    6715 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-581506_kube-system_0a886f34999ac0d6b56a638cab77f640_2\" already exists"
	May 10 18:20:47 functional-581506 kubelet[6715]: E0510 18:20:47.831328    6715 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-581506_kube-system_0a886f34999ac0d6b56a638cab77f640_2\" already exists" pod="kube-system/kube-scheduler-functional-581506"
	May 10 18:20:47 functional-581506 kubelet[6715]: E0510 18:20:47.831390    6715 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-581506_kube-system_0a886f34999ac0d6b56a638cab77f640_2\" already exists" pod="kube-system/kube-scheduler-functional-581506"
	May 10 18:20:47 functional-581506 kubelet[6715]: E0510 18:20:47.831503    6715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-functional-581506_kube-system(0a886f34999ac0d6b56a638cab77f640)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-scheduler-functional-581506_kube-system(0a886f34999ac0d6b56a638cab77f640)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-scheduler-functional-581506_kube-system_0a886f34999ac0d6b56a638cab77f640_2\\\" already exists\"" pod="kube-system/kube-scheduler-functional-581506" podUID="0a886f34999ac0d6b56a638cab77f640"
	May 10 18:20:50 functional-581506 kubelet[6715]: E0510 18:20:50.218220    6715 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746901250217495620,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:190727,},InodesUsed:&UInt64Value{Value:98,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:20:50 functional-581506 kubelet[6715]: E0510 18:20:50.218264    6715 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746901250217495620,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:190727,},InodesUsed:&UInt64Value{Value:98,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:21:00 functional-581506 kubelet[6715]: E0510 18:21:00.220759    6715 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746901260220377883,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:190727,},InodesUsed:&UInt64Value{Value:98,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:21:00 functional-581506 kubelet[6715]: E0510 18:21:00.220791    6715 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746901260220377883,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:190727,},InodesUsed:&UInt64Value{Value:98,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:21:01 functional-581506 kubelet[6715]: E0510 18:21:01.830475    6715 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-581506_kube-system_0a886f34999ac0d6b56a638cab77f640_2\" already exists"
	May 10 18:21:01 functional-581506 kubelet[6715]: E0510 18:21:01.830812    6715 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-581506_kube-system_0a886f34999ac0d6b56a638cab77f640_2\" already exists" pod="kube-system/kube-scheduler-functional-581506"
	May 10 18:21:01 functional-581506 kubelet[6715]: E0510 18:21:01.830953    6715 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-581506_kube-system_0a886f34999ac0d6b56a638cab77f640_2\" already exists" pod="kube-system/kube-scheduler-functional-581506"
	May 10 18:21:01 functional-581506 kubelet[6715]: E0510 18:21:01.831070    6715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-functional-581506_kube-system(0a886f34999ac0d6b56a638cab77f640)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-scheduler-functional-581506_kube-system(0a886f34999ac0d6b56a638cab77f640)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-scheduler-functional-581506_kube-system_0a886f34999ac0d6b56a638cab77f640_2\\\" already exists\"" pod="kube-system/kube-scheduler-functional-581506" podUID="0a886f34999ac0d6b56a638cab77f640"
	
	
	==> storage-provisioner [206d421221f482411c4e5a5ef3f7102eccd8b38f07c242446855962f9958f985] <==
	W0510 18:20:42.736659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:20:44.740569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:20:44.745623       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:20:46.748634       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:20:46.753791       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:20:48.758149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:20:48.767943       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:20:50.771328       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:20:50.778031       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:20:52.780817       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:20:52.792470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:20:54.795747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:20:54.801789       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:20:56.805088       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:20:56.814161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:20:58.817565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:20:58.826943       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:21:00.831091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:21:00.836605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:21:02.840027       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:21:02.849611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:21:04.852466       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:21:04.857658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:21:06.862574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:21:06.872340       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [9ddf6914642a098d580c48db641460c4197df74a06bf7008e362f610f185934d] <==
	I0510 18:02:23.672106       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0510 18:02:23.683594       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0510 18:02:23.683625       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0510 18:02:23.702833       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:02:27.159140       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:02:31.422834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:02:35.021700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:02:38.075295       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:02:41.098182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:02:41.109385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0510 18:02:41.109555       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0510 18:02:41.109770       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-581506_a46214da-4c1e-4fc3-976f-44d996fb2ca3!
	I0510 18:02:41.110126       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fd69f3b7-01e0-4535-950c-10464666b122", APIVersion:"v1", ResourceVersion:"525", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-581506_a46214da-4c1e-4fc3-976f-44d996fb2ca3 became leader
	W0510 18:02:41.126141       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:02:41.133416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0510 18:02:41.210982       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-581506_a46214da-4c1e-4fc3-976f-44d996fb2ca3!
	W0510 18:02:43.137335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:02:43.144935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:02:45.148031       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:02:45.154106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:02:47.157824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:02:47.175021       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-581506 -n functional-581506
helpers_test.go:261: (dbg) Run:  kubectl --context functional-581506 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount hello-node-connect-58f9cf68d8-prxzn hello-node-fcfd88b6f-gmdwq mysql-58ccfd96bb-2jm87 sp-pod dashboard-metrics-scraper-5d59dccf9b-w9spf kubernetes-dashboard-7779f9b69b-ljpkm
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-581506 describe pod busybox-mount hello-node-connect-58f9cf68d8-prxzn hello-node-fcfd88b6f-gmdwq mysql-58ccfd96bb-2jm87 sp-pod dashboard-metrics-scraper-5d59dccf9b-w9spf kubernetes-dashboard-7779f9b69b-ljpkm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-581506 describe pod busybox-mount hello-node-connect-58f9cf68d8-prxzn hello-node-fcfd88b6f-gmdwq mysql-58ccfd96bb-2jm87 sp-pod dashboard-metrics-scraper-5d59dccf9b-w9spf kubernetes-dashboard-7779f9b69b-ljpkm: exit status 1 (100.738705ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  mount-munger:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    Environment:  <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t5rkt (ro)
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-t5rkt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             hello-node-connect-58f9cf68d8-prxzn
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=hello-node-connect
	                  pod-template-hash=58f9cf68d8
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-node-connect-58f9cf68d8
	Containers:
	  echoserver:
	    Image:        registry.k8s.io/echoserver:1.8
	    Port:         <none>
	    Host Port:    <none>
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vjht5 (ro)
	Volumes:
	  kube-api-access-vjht5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             hello-node-fcfd88b6f-gmdwq
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=hello-node
	                  pod-template-hash=fcfd88b6f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-node-fcfd88b6f
	Containers:
	  echoserver:
	    Image:        registry.k8s.io/echoserver:1.8
	    Port:         <none>
	    Host Port:    <none>
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-56j2g (ro)
	Volumes:
	  kube-api-access-56j2g:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             mysql-58ccfd96bb-2jm87
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=mysql
	                  pod-template-hash=58ccfd96bb
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/mysql-58ccfd96bb
	Containers:
	  mysql:
	    Image:      docker.io/mysql:5.7
	    Port:       3306/TCP
	    Host Port:  0/TCP
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-v72cr (ro)
	Volumes:
	  kube-api-access-v72cr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  myfrontend:
	    Image:        docker.io/nginx
	    Port:         <none>
	    Host Port:    <none>
	    Environment:  <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6q8c7 (ro)
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-6q8c7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5d59dccf9b-w9spf" not found
	Error from server (NotFound): pods "kubernetes-dashboard-7779f9b69b-ljpkm" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context functional-581506 describe pod busybox-mount hello-node-connect-58f9cf68d8-prxzn hello-node-fcfd88b6f-gmdwq mysql-58ccfd96bb-2jm87 sp-pod dashboard-metrics-scraper-5d59dccf9b-w9spf kubernetes-dashboard-7779f9b69b-ljpkm: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (302.39s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (603.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1646: (dbg) Run:  kubectl --context functional-581506 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-581506 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-prxzn" [2a0592d4-3774-4ea1-8765-ab0f897d4738] Pending
helpers_test.go:329: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1657: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1657: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-581506 -n functional-581506
functional_test.go:1657: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-05-10 18:18:54.870319516 +0000 UTC m=+1609.197932163
functional_test.go:1657: (dbg) Run:  kubectl --context functional-581506 describe po hello-node-connect-58f9cf68d8-prxzn -n default
functional_test.go:1657: (dbg) kubectl --context functional-581506 describe po hello-node-connect-58f9cf68d8-prxzn -n default:
Name:             hello-node-connect-58f9cf68d8-prxzn
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           app=hello-node-connect
pod-template-hash=58f9cf68d8
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    ReplicaSet/hello-node-connect-58f9cf68d8
Containers:
echoserver:
Image:        registry.k8s.io/echoserver:1.8
Port:         <none>
Host Port:    <none>
Environment:  <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vjht5 (ro)
Volumes:
kube-api-access-vjht5:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>
functional_test.go:1657: (dbg) Run:  kubectl --context functional-581506 logs hello-node-connect-58f9cf68d8-prxzn -n default
functional_test.go:1657: (dbg) kubectl --context functional-581506 logs hello-node-connect-58f9cf68d8-prxzn -n default:
functional_test.go:1658: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1615: service test failed - dumping debug information
functional_test.go:1616: -----------------------service failure post-mortem--------------------------------
functional_test.go:1619: (dbg) Run:  kubectl --context functional-581506 describe po hello-node-connect
functional_test.go:1623: hello-node pod describe:
Name:             hello-node-connect-58f9cf68d8-prxzn
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           app=hello-node-connect
                  pod-template-hash=58f9cf68d8
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    ReplicaSet/hello-node-connect-58f9cf68d8
Containers:
  echoserver:
    Image:        registry.k8s.io/echoserver:1.8
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vjht5 (ro)
Volumes:
  kube-api-access-vjht5:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>

                                                
                                                
functional_test.go:1625: (dbg) Run:  kubectl --context functional-581506 logs -l app=hello-node-connect
functional_test.go:1629: hello-node logs:
functional_test.go:1631: (dbg) Run:  kubectl --context functional-581506 describe svc hello-node-connect
functional_test.go:1635: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.111.228.11
IPs:                      10.111.228.11
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31767/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
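(Editor's note: the empty Endpoints field above matches the pod never reaching Ready; a NodePort service only forwards to pods that pass readiness checks and match its selector. Illustrative commands to confirm this correspondence, assuming the same context:)
	# An empty address list here means requests to the NodePort have nothing to reach
	kubectl --context functional-581506 get endpoints hello-node-connect -o wide
	kubectl --context functional-581506 get pods -l app=hello-node-connect -o wide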
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-581506 -n functional-581506
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-581506 logs -n 25: (1.708814191s)
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|----------------|------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-581506 ssh sudo                                             | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:16 UTC |                     |
	|                | umount -f /mount-9p                                                    |                   |         |         |                     |                     |
	| mount          | -p functional-581506                                                   | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:16 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup3783465736/001:/mount1 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                   |         |         |                     |                     |
	| ssh            | functional-581506 ssh findmnt                                          | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:16 UTC |                     |
	|                | -T /mount1                                                             |                   |         |         |                     |                     |
	| mount          | -p functional-581506                                                   | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:16 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup3783465736/001:/mount3 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                   |         |         |                     |                     |
	| mount          | -p functional-581506                                                   | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:16 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup3783465736/001:/mount2 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                   |         |         |                     |                     |
	| ssh            | functional-581506 ssh findmnt                                          | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:16 UTC | 10 May 25 18:16 UTC |
	|                | -T /mount1                                                             |                   |         |         |                     |                     |
	| ssh            | functional-581506 ssh findmnt                                          | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:16 UTC | 10 May 25 18:16 UTC |
	|                | -T /mount2                                                             |                   |         |         |                     |                     |
	| ssh            | functional-581506 ssh findmnt                                          | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:16 UTC | 10 May 25 18:16 UTC |
	|                | -T /mount3                                                             |                   |         |         |                     |                     |
	| mount          | -p functional-581506                                                   | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:16 UTC |                     |
	|                | --kill=true                                                            |                   |         |         |                     |                     |
	| start          | -p functional-581506                                                   | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:16 UTC |                     |
	|                | --dry-run --memory                                                     |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr                                                |                   |         |         |                     |                     |
	|                | --driver=kvm2                                                          |                   |         |         |                     |                     |
	|                | --container-runtime=crio                                               |                   |         |         |                     |                     |
	| start          | -p functional-581506                                                   | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:16 UTC |                     |
	|                | --dry-run --alsologtostderr                                            |                   |         |         |                     |                     |
	|                | -v=1 --driver=kvm2                                                     |                   |         |         |                     |                     |
	|                | --container-runtime=crio                                               |                   |         |         |                     |                     |
	| start          | -p functional-581506                                                   | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:16 UTC |                     |
	|                | --dry-run --memory                                                     |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr                                                |                   |         |         |                     |                     |
	|                | --driver=kvm2                                                          |                   |         |         |                     |                     |
	|                | --container-runtime=crio                                               |                   |         |         |                     |                     |
	| dashboard      | --url --port 36195                                                     | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:16 UTC |                     |
	|                | -p functional-581506                                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                   |         |         |                     |                     |
	| service        | functional-581506 service list                                         | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:18 UTC | 10 May 25 18:18 UTC |
	| service        | functional-581506 service list                                         | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:18 UTC | 10 May 25 18:18 UTC |
	|                | -o json                                                                |                   |         |         |                     |                     |
	| update-context | functional-581506                                                      | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:18 UTC | 10 May 25 18:18 UTC |
	|                | update-context                                                         |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                   |         |         |                     |                     |
	| update-context | functional-581506                                                      | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:18 UTC | 10 May 25 18:18 UTC |
	|                | update-context                                                         |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                   |         |         |                     |                     |
	| update-context | functional-581506                                                      | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:18 UTC | 10 May 25 18:18 UTC |
	|                | update-context                                                         |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                   |         |         |                     |                     |
	| service        | functional-581506 service                                              | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:18 UTC |                     |
	|                | --namespace=default --https                                            |                   |         |         |                     |                     |
	|                | --url hello-node                                                       |                   |         |         |                     |                     |
	| image          | functional-581506                                                      | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:18 UTC | 10 May 25 18:18 UTC |
	|                | image ls --format short                                                |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                   |         |         |                     |                     |
	| image          | functional-581506                                                      | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:18 UTC | 10 May 25 18:18 UTC |
	|                | image ls --format yaml                                                 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                   |         |         |                     |                     |
	| service        | functional-581506                                                      | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:18 UTC |                     |
	|                | service hello-node --url                                               |                   |         |         |                     |                     |
	|                | --format={{.IP}}                                                       |                   |         |         |                     |                     |
	| ssh            | functional-581506 ssh pgrep                                            | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:18 UTC |                     |
	|                | buildkitd                                                              |                   |         |         |                     |                     |
	| service        | functional-581506 service                                              | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:18 UTC |                     |
	|                | hello-node --url                                                       |                   |         |         |                     |                     |
	| image          | functional-581506 image build -t                                       | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:18 UTC |                     |
	|                | localhost/my-image:functional-581506                                   |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                   |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/05/10 18:16:06
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0510 18:16:06.249127  407218 out.go:345] Setting OutFile to fd 1 ...
	I0510 18:16:06.249230  407218 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 18:16:06.249245  407218 out.go:358] Setting ErrFile to fd 2...
	I0510 18:16:06.249249  407218 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 18:16:06.249538  407218 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-388787/.minikube/bin
	I0510 18:16:06.250056  407218 out.go:352] Setting JSON to false
	I0510 18:16:06.250986  407218 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":28714,"bootTime":1746872252,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0510 18:16:06.251048  407218 start.go:140] virtualization: kvm guest
	I0510 18:16:06.252905  407218 out.go:177] * [functional-581506] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0510 18:16:06.254379  407218 out.go:177]   - MINIKUBE_LOCATION=20720
	I0510 18:16:06.254378  407218 notify.go:220] Checking for updates...
	I0510 18:16:06.255877  407218 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0510 18:16:06.257250  407218 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20720-388787/kubeconfig
	I0510 18:16:06.258440  407218 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-388787/.minikube
	I0510 18:16:06.259843  407218 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0510 18:16:06.261024  407218 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0510 18:16:06.262455  407218 config.go:182] Loaded profile config "functional-581506": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 18:16:06.262923  407218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 18:16:06.263004  407218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:16:06.279063  407218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38601
	I0510 18:16:06.279680  407218 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:16:06.280465  407218 main.go:141] libmachine: Using API Version  1
	I0510 18:16:06.280504  407218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:16:06.280895  407218 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:16:06.281110  407218 main.go:141] libmachine: (functional-581506) Calling .DriverName
	I0510 18:16:06.281407  407218 driver.go:404] Setting default libvirt URI to qemu:///system
	I0510 18:16:06.281717  407218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 18:16:06.281756  407218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:16:06.297201  407218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44999
	I0510 18:16:06.297734  407218 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:16:06.298367  407218 main.go:141] libmachine: Using API Version  1
	I0510 18:16:06.298396  407218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:16:06.298758  407218 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:16:06.298967  407218 main.go:141] libmachine: (functional-581506) Calling .DriverName
	I0510 18:16:06.333465  407218 out.go:177] * Using the kvm2 driver based on existing profile
	I0510 18:16:06.334614  407218 start.go:304] selected driver: kvm2
	I0510 18:16:06.334628  407218 start.go:908] validating driver "kvm2" against &{Name:functional-581506 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.33.0 ClusterName:functional-581506 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.52 Port:8441 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 18:16:06.334724  407218 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0510 18:16:06.336620  407218 out.go:201] 
	W0510 18:16:06.337727  407218 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0510 18:16:06.338871  407218 out.go:201] 
	
	
	==> CRI-O <==
	May 10 18:18:56 functional-581506 crio[5891]: time="2025-05-10 18:18:56.351483229Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e321246f-8aa3-40bc-9a25-c2f17801d960 name=/runtime.v1.RuntimeService/Version
	May 10 18:18:56 functional-581506 crio[5891]: time="2025-05-10 18:18:56.351582052Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e321246f-8aa3-40bc-9a25-c2f17801d960 name=/runtime.v1.RuntimeService/Version
	May 10 18:18:56 functional-581506 crio[5891]: time="2025-05-10 18:18:56.353813642Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0ff6c9e1-7594-46d1-a0c2-746c03f21745 name=/runtime.v1.ImageService/ImageFsInfo
	May 10 18:18:56 functional-581506 crio[5891]: time="2025-05-10 18:18:56.354482026Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746901136354459223,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:190727,},InodesUsed:&UInt64Value{Value:98,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0ff6c9e1-7594-46d1-a0c2-746c03f21745 name=/runtime.v1.ImageService/ImageFsInfo
	May 10 18:18:56 functional-581506 crio[5891]: time="2025-05-10 18:18:56.355201031Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=49d70ac6-f23d-4997-99f2-c76b53421526 name=/runtime.v1.RuntimeService/ListContainers
	May 10 18:18:56 functional-581506 crio[5891]: time="2025-05-10 18:18:56.355294807Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=49d70ac6-f23d-4997-99f2-c76b53421526 name=/runtime.v1.RuntimeService/ListContainers
	May 10 18:18:56 functional-581506 crio[5891]: time="2025-05-10 18:18:56.355597388Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bc59d73bfa2bf2b1a39e797a7d2b573e644354a5079881f3dd26cec1c252aba,PodSandboxId:ac1fe88b05f85fd3070ec6db14c318d8b19cd922062770aa6fc6b88cf2bc0f14,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1746900274095849002,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-t4rcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1c5c10-5db3-43e0-935a-0549799273f3,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca40d958630336ad5282e3e644a344eb6222b09f601d44b816dbc17429e58924,PodSandboxId:0c91495cd04f27933de8b107c48b7ad6314a49c58ac7a22c6acb1832e85de258,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,State:CONTAINER_RUNNING,CreatedAt:1746900270663661287,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-581506,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 65ecf0b12922dcc8259f7d51baab7e18,},Annotations:map[string]string{io.kubernetes.container.hash: 2e2dc675,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:206d421221f482411c4e5a5ef3f7102eccd8b38f07c242446855962f9958f985,PodSandboxId:a71305dd0a11cb4fec07b8ecece405b394e529aed22658f28110cc632eb39534,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1746900267571783633,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: ea7d9372-7c9e-444b-a628-0dfc4003f07d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bad506e8de60f9ec83d122523ca19234a72175234ffd3433d02684eb651ce9d,PodSandboxId:ad6bf190d55676203ab65df23981cd676ca08ed2bc2eef1dd05517d694c7e66e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,State:CONTAINER_RUNNING,CreatedAt:1746900267584333315,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-581506,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: b2dc81ade1bbda73868f61223889f8f4,},Annotations:map[string]string{io.kubernetes.container.hash: 20846f37,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1002d7979feaa7a0860a8934e8992ae4fdc369b64f2a34d3a93bf01f4e8015e3,PodSandboxId:1713a07d44b66f7d807e2bd691e25e7ecdd6e7c5d84c1261729e464047a1a031,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1746900267652578505,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-581506,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe62e874
0903c7a0badf385e7524512e,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c908b9ef4e2dd4afa4d8c8077af1366569126a578de927d95d14f07813040bab,PodSandboxId:da31c3a5af7bf008afa7c113669c143c8daf56d21cd077d4cf6dc85664b412de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,State:CONTAINER_RUNNING,CreatedAt:1746900267384004977,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sxk9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f4ab1a-93f7-4c1e-bcbe-5f9c9daaae46,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 2406bd3f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b4eccacbeea6a58cc9c575f2c2bf5f8297029f9c9d2a9264bcf3e69644b4c28,PodSandboxId:e49ea2b58308c6c0b9b2908ae1ab6a5818f361d3a75849eac0ab8eb63fab41ca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,State:CONTAINER_EXITED,CreatedAt:1746900143518984605,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sxk9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f4ab1a-93f7-4c1e-bcbe-5f9c9daaae46,},Annotations:map[string]string{io.kubernet
es.container.hash: 2406bd3f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ddf6914642a098d580c48db641460c4197df74a06bf7008e362f610f185934d,PodSandboxId:00c4138d2ab0d3a6880991ae6ca2f7c7e3c2de33b60a469043a91f7f8adef12d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1746900143498396555,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea7d9372-7c9e-444b-a628-0dfc4003f07d,},Annotations:map[string]string{io.kubernetes.container.
hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67bef24b725ebf7a2b7f343d7456516d6b5de38118f9cf48e7d70d9146ce2087,PodSandboxId:e30af250008246b61b90a3718d1c328f2984559c29b8526e0386129454a98b4a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_EXITED,CreatedAt:1746900143526731809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-t4rcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1c5c10-5db3-43e0-935a-0549799273f3,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.con
tainer.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5879bea6c3a25517766471c3eec758ce0c6d853db7055e1f3505263a674ed969,PodSandboxId:2cc3ee9d3458fbdf619a3c176b445eff63eefe6d42ab071484b6ca448013de07,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_EXITED,CreatedAt:1746900136904565424,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-581506,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe62e8740903c7a0badf385e7524512e,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74fd0b7de642965eb7e03cf324017cb2195034685758e46efbd5e6997aba9ae5,PodSandboxId:45aa7f96fbe49dd74e9cdfcc97884ce5caba88b39b6e9b00f2357661ecbba1a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,State:CONTAINER_EXITED,CreatedAt:1746900136908093042,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functiona
l-581506,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2dc81ade1bbda73868f61223889f8f4,},Annotations:map[string]string{io.kubernetes.container.hash: 20846f37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc42d63e6220a437de1d056d765ed97df2e6978798401b10283f61c7b1bc895b,PodSandboxId:6ed00def2c968d5a51634c7dafc6e6cc749b20e361a2365659842d41ca79ff9c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,State:CONTAINER_EXITED,CreatedAt:1746900136856417357,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-581506,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a886f34999ac0d6b56a638cab77f640,},Annotations:map[string]string{io.kubernetes.container.hash: fd54b99d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=49d70ac6-f23d-4997-99f2-c76b53421526 name=/runtime.v1.RuntimeService/ListContainers
	May 10 18:18:56 functional-581506 crio[5891]: time="2025-05-10 18:18:56.415979479Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6d05fc92-9555-455b-bc57-7bd848becf59 name=/runtime.v1.RuntimeService/Version
	May 10 18:18:56 functional-581506 crio[5891]: time="2025-05-10 18:18:56.416068384Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6d05fc92-9555-455b-bc57-7bd848becf59 name=/runtime.v1.RuntimeService/Version
	May 10 18:18:56 functional-581506 crio[5891]: time="2025-05-10 18:18:56.417649376Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=79727777-bc93-44ff-8343-5cf2d72e8cf3 name=/runtime.v1.ImageService/ImageFsInfo
	May 10 18:18:56 functional-581506 crio[5891]: time="2025-05-10 18:18:56.418296365Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746901136418274749,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:190727,},InodesUsed:&UInt64Value{Value:98,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=79727777-bc93-44ff-8343-5cf2d72e8cf3 name=/runtime.v1.ImageService/ImageFsInfo
	May 10 18:18:56 functional-581506 crio[5891]: time="2025-05-10 18:18:56.419074469Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=051c35dd-12c1-44d9-b715-08f0b3303e9e name=/runtime.v1.RuntimeService/ListContainers
	May 10 18:18:56 functional-581506 crio[5891]: time="2025-05-10 18:18:56.419143592Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=051c35dd-12c1-44d9-b715-08f0b3303e9e name=/runtime.v1.RuntimeService/ListContainers
	May 10 18:18:56 functional-581506 crio[5891]: time="2025-05-10 18:18:56.419372925Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bc59d73bfa2bf2b1a39e797a7d2b573e644354a5079881f3dd26cec1c252aba,PodSandboxId:ac1fe88b05f85fd3070ec6db14c318d8b19cd922062770aa6fc6b88cf2bc0f14,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1746900274095849002,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-t4rcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1c5c10-5db3-43e0-935a-0549799273f3,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca40d958630336ad5282e3e644a344eb6222b09f601d44b816dbc17429e58924,PodSandboxId:0c91495cd04f27933de8b107c48b7ad6314a49c58ac7a22c6acb1832e85de258,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,State:CONTAINER_RUNNING,CreatedAt:1746900270663661287,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-581506,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 65ecf0b12922dcc8259f7d51baab7e18,},Annotations:map[string]string{io.kubernetes.container.hash: 2e2dc675,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:206d421221f482411c4e5a5ef3f7102eccd8b38f07c242446855962f9958f985,PodSandboxId:a71305dd0a11cb4fec07b8ecece405b394e529aed22658f28110cc632eb39534,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1746900267571783633,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: ea7d9372-7c9e-444b-a628-0dfc4003f07d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bad506e8de60f9ec83d122523ca19234a72175234ffd3433d02684eb651ce9d,PodSandboxId:ad6bf190d55676203ab65df23981cd676ca08ed2bc2eef1dd05517d694c7e66e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,State:CONTAINER_RUNNING,CreatedAt:1746900267584333315,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-581506,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: b2dc81ade1bbda73868f61223889f8f4,},Annotations:map[string]string{io.kubernetes.container.hash: 20846f37,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1002d7979feaa7a0860a8934e8992ae4fdc369b64f2a34d3a93bf01f4e8015e3,PodSandboxId:1713a07d44b66f7d807e2bd691e25e7ecdd6e7c5d84c1261729e464047a1a031,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1746900267652578505,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-581506,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe62e874
0903c7a0badf385e7524512e,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c908b9ef4e2dd4afa4d8c8077af1366569126a578de927d95d14f07813040bab,PodSandboxId:da31c3a5af7bf008afa7c113669c143c8daf56d21cd077d4cf6dc85664b412de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,State:CONTAINER_RUNNING,CreatedAt:1746900267384004977,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sxk9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f4ab1a-93f7-4c1e-bcbe-5f9c9daaae46,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 2406bd3f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b4eccacbeea6a58cc9c575f2c2bf5f8297029f9c9d2a9264bcf3e69644b4c28,PodSandboxId:e49ea2b58308c6c0b9b2908ae1ab6a5818f361d3a75849eac0ab8eb63fab41ca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,State:CONTAINER_EXITED,CreatedAt:1746900143518984605,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sxk9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f4ab1a-93f7-4c1e-bcbe-5f9c9daaae46,},Annotations:map[string]string{io.kubernet
es.container.hash: 2406bd3f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ddf6914642a098d580c48db641460c4197df74a06bf7008e362f610f185934d,PodSandboxId:00c4138d2ab0d3a6880991ae6ca2f7c7e3c2de33b60a469043a91f7f8adef12d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1746900143498396555,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea7d9372-7c9e-444b-a628-0dfc4003f07d,},Annotations:map[string]string{io.kubernetes.container.
hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67bef24b725ebf7a2b7f343d7456516d6b5de38118f9cf48e7d70d9146ce2087,PodSandboxId:e30af250008246b61b90a3718d1c328f2984559c29b8526e0386129454a98b4a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_EXITED,CreatedAt:1746900143526731809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-t4rcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1c5c10-5db3-43e0-935a-0549799273f3,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.con
tainer.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5879bea6c3a25517766471c3eec758ce0c6d853db7055e1f3505263a674ed969,PodSandboxId:2cc3ee9d3458fbdf619a3c176b445eff63eefe6d42ab071484b6ca448013de07,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_EXITED,CreatedAt:1746900136904565424,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-581506,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe62e8740903c7a0badf385e7524512e,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74fd0b7de642965eb7e03cf324017cb2195034685758e46efbd5e6997aba9ae5,PodSandboxId:45aa7f96fbe49dd74e9cdfcc97884ce5caba88b39b6e9b00f2357661ecbba1a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,State:CONTAINER_EXITED,CreatedAt:1746900136908093042,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functiona
l-581506,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2dc81ade1bbda73868f61223889f8f4,},Annotations:map[string]string{io.kubernetes.container.hash: 20846f37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc42d63e6220a437de1d056d765ed97df2e6978798401b10283f61c7b1bc895b,PodSandboxId:6ed00def2c968d5a51634c7dafc6e6cc749b20e361a2365659842d41ca79ff9c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,State:CONTAINER_EXITED,CreatedAt:1746900136856417357,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-581506,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a886f34999ac0d6b56a638cab77f640,},Annotations:map[string]string{io.kubernetes.container.hash: fd54b99d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=051c35dd-12c1-44d9-b715-08f0b3303e9e name=/runtime.v1.RuntimeService/ListContainers
	May 10 18:18:56 functional-581506 crio[5891]: time="2025-05-10 18:18:56.425451373Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=368e81f9-ef91-44ae-94cf-6ecf24ba0b0a name=/runtime.v1.ImageService/ImageFsInfo
	May 10 18:18:56 functional-581506 crio[5891]: time="2025-05-10 18:18:56.426114107Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746901136426090607,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:190727,},InodesUsed:&UInt64Value{Value:98,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=368e81f9-ef91-44ae-94cf-6ecf24ba0b0a name=/runtime.v1.ImageService/ImageFsInfo
	May 10 18:18:56 functional-581506 crio[5891]: time="2025-05-10 18:18:56.427013213Z" level=debug msg="Request: &ListImagesRequest{Filter:&ImageFilter{Image:&ImageSpec{Image:,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},},}" file="otel-collector/interceptors.go:62" id=2b5bef76-8455-4d54-b7d5-817e61ed8512 name=/runtime.v1.ImageService/ListImages
	May 10 18:18:56 functional-581506 crio[5891]: time="2025-05-10 18:18:56.427595067Z" level=debug msg="Response: &ListImagesResponse{Images:[]*Image{&Image{Id:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,RepoTags:[registry.k8s.io/kube-apiserver:v1.33.0],RepoDigests:[registry.k8s.io/kube-apiserver@sha256:6679a9970a8b2f18647b33bf02e5e9895d286689256e2f7172481b4096e46a32 registry.k8s.io/kube-apiserver@sha256:6c0f4ade3e5a34d8791a48671b127a00dc114e84b70ec4d92e586c17d68a1ca6],Size_:102858210,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,RepoTags:[registry.k8s.io/kube-controller-manager:v1.33.0],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:9de627a31852175b8308cb7c8d92f15365672f6bf26026719cc1c05a03580bc4 registry.k8s.io/kube-controller-manager@sha256:f0b32ab11fd06504608cdb9084f7284106b4f5f07f35eb8823e70ea0eaaf252a],Size_:95653192,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},
&Image{Id:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,RepoTags:[registry.k8s.io/kube-scheduler:v1.33.0],RepoDigests:[registry.k8s.io/kube-scheduler@sha256:8dd2fbeb7f711da53a89ded239e54133f34110d98de887a39a9021e651b51f1f registry.k8s.io/kube-scheduler@sha256:b375b81c7f253be3f093232650b153288e7f90be3d02a025fd602b4b40fd95c5],Size_:74501448,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,RepoTags:[registry.k8s.io/kube-proxy:v1.33.0],RepoDigests:[registry.k8s.io/kube-proxy@sha256:05f8984642d05b1b1a6c37605a4a566e46e7290f9291d17885f096c36861095b registry.k8s.io/kube-proxy@sha256:32b893c37d363b18711b397f6ccb29655e3d08183d410f1a93ad298992c9ea7e],Size_:99145113,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,RepoTags:[registry.k8s.io/pause:3.10],RepoDigests:[registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd3097
54d4a registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a],Size_:742080,Uid:&Int64Value{Value:65535,},Username:,Spec:nil,Pinned:true,},&Image{Id:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,RepoTags:[registry.k8s.io/etcd:3.5.21-0],RepoDigests:[registry.k8s.io/etcd@sha256:21d2177d708b53ac0fbd1c073c334d58f913eb75da293ff086610e61af03630a registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121],Size_:154190592,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,RepoTags:[registry.k8s.io/coredns/coredns:v1.12.0],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:2324f485c8db937628a18c293d946327f3a7229b9f77213e8f2256f0b616a4ee registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97],Size_:71169915,Uid:nil,Username:nonroot,Spec:nil,Pinned:false,},&Image{Id:6e38f40d628db3002f5617342c8872c93
5de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:df3849d954c98a7162c7bee7313ece357606e313d98ebd68b7aac5e961b1156f,RepoTags:[docker.io/kindest/kindnetd:v20250214-acbabc1a],RepoDigests:[docker.io/kindest/kindnetd@sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495 docker.io/kindest/kindnetd@sha256:f3108bcefe4c9797081f9b4405e510eaec07ff17b8224077b3bad839452ebc97],Size_:95703604,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e,RepoTags:[registry.k8s.io/pause:3.1],RepoDigests:[registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e],Size_:746911,Uid:nil,U
sername:,Spec:nil,Pinned:false,},&Image{Id:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da,RepoTags:[registry.k8s.io/pause:3.3],RepoDigests:[registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04],Size_:686139,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:0e0d4ec4d5d14ca69921cfd1b2093d0d8e26cefa42b9a37b63fe0c1391dd3bc2,RepoTags:[localhost/minikube-local-cache-test:functional-581506],RepoDigests:[localhost/minikube-local-cache-test@sha256:723a2f921ac79321b709c6f8bced4af5feea37a40b1c2497830de10a50fb2c88],Size_:3330,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06,RepoTags:[registry.k8s.io/pause:latest],RepoDigests:[registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9],Size_:247077,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,RepoTags:[localhost/kicbase/echo-server:function
al-581506],RepoDigests:[localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf],Size_:4943877,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a,RepoTags:[gcr.io/k8s-minikube/busybox:latest],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b],Size_:1462480,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:b6613321534d660b014e9996d9e197cb4d152bf271ba45d684d48d781477858c,RepoTags:[],RepoDigests:[docker.io/library/501a7b9d5d3b20ddfdadaabb6821805a264db49dec8a499b393fb3582f33e766-tmp@sha256:c626c86f9b49102678074df4ffaa11d74d81298198ec84ee791b5b66413bf3b1],Size_:1466018,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:1be27181eeb41655ff50498e113b43cb443e525b2acc1a174b8902e29215277a,RepoTags:[localhost/my-image:functional-581506],RepoDigests
:[localhost/my-image@sha256:0a0a65e09914176b1fb4e8db2ebba96f3a7a8bd37b63e0dda3d1fe362ff8b041],Size_:1468600,Uid:nil,Username:,Spec:nil,Pinned:false,},},}" file="otel-collector/interceptors.go:74" id=2b5bef76-8455-4d54-b7d5-817e61ed8512 name=/runtime.v1.ImageService/ListImages
	May 10 18:18:56 functional-581506 crio[5891]: time="2025-05-10 18:18:56.479507256Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=487effbd-e1a2-4479-ac0f-83daee16b099 name=/runtime.v1.RuntimeService/Version
	May 10 18:18:56 functional-581506 crio[5891]: time="2025-05-10 18:18:56.479597004Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=487effbd-e1a2-4479-ac0f-83daee16b099 name=/runtime.v1.RuntimeService/Version
	May 10 18:18:56 functional-581506 crio[5891]: time="2025-05-10 18:18:56.480695134Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2a1d400c-09f2-48e5-9306-4a6efd3d71b6 name=/runtime.v1.ImageService/ImageFsInfo
	May 10 18:18:56 functional-581506 crio[5891]: time="2025-05-10 18:18:56.481323884Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746901136481300759,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:190727,},InodesUsed:&UInt64Value{Value:98,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2a1d400c-09f2-48e5-9306-4a6efd3d71b6 name=/runtime.v1.ImageService/ImageFsInfo
	May 10 18:18:56 functional-581506 crio[5891]: time="2025-05-10 18:18:56.482085997Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=928073cb-03e8-4c87-8048-6c490eae9961 name=/runtime.v1.RuntimeService/ListContainers
	May 10 18:18:56 functional-581506 crio[5891]: time="2025-05-10 18:18:56.482140586Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=928073cb-03e8-4c87-8048-6c490eae9961 name=/runtime.v1.RuntimeService/ListContainers
	May 10 18:18:56 functional-581506 crio[5891]: time="2025-05-10 18:18:56.482412768Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bc59d73bfa2bf2b1a39e797a7d2b573e644354a5079881f3dd26cec1c252aba,PodSandboxId:ac1fe88b05f85fd3070ec6db14c318d8b19cd922062770aa6fc6b88cf2bc0f14,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1746900274095849002,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-t4rcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1c5c10-5db3-43e0-935a-0549799273f3,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca40d958630336ad5282e3e644a344eb6222b09f601d44b816dbc17429e58924,PodSandboxId:0c91495cd04f27933de8b107c48b7ad6314a49c58ac7a22c6acb1832e85de258,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,State:CONTAINER_RUNNING,CreatedAt:1746900270663661287,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-581506,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 65ecf0b12922dcc8259f7d51baab7e18,},Annotations:map[string]string{io.kubernetes.container.hash: 2e2dc675,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:206d421221f482411c4e5a5ef3f7102eccd8b38f07c242446855962f9958f985,PodSandboxId:a71305dd0a11cb4fec07b8ecece405b394e529aed22658f28110cc632eb39534,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1746900267571783633,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: ea7d9372-7c9e-444b-a628-0dfc4003f07d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bad506e8de60f9ec83d122523ca19234a72175234ffd3433d02684eb651ce9d,PodSandboxId:ad6bf190d55676203ab65df23981cd676ca08ed2bc2eef1dd05517d694c7e66e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,State:CONTAINER_RUNNING,CreatedAt:1746900267584333315,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-581506,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: b2dc81ade1bbda73868f61223889f8f4,},Annotations:map[string]string{io.kubernetes.container.hash: 20846f37,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1002d7979feaa7a0860a8934e8992ae4fdc369b64f2a34d3a93bf01f4e8015e3,PodSandboxId:1713a07d44b66f7d807e2bd691e25e7ecdd6e7c5d84c1261729e464047a1a031,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1746900267652578505,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-581506,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe62e874
0903c7a0badf385e7524512e,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c908b9ef4e2dd4afa4d8c8077af1366569126a578de927d95d14f07813040bab,PodSandboxId:da31c3a5af7bf008afa7c113669c143c8daf56d21cd077d4cf6dc85664b412de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,State:CONTAINER_RUNNING,CreatedAt:1746900267384004977,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sxk9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f4ab1a-93f7-4c1e-bcbe-5f9c9daaae46,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 2406bd3f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b4eccacbeea6a58cc9c575f2c2bf5f8297029f9c9d2a9264bcf3e69644b4c28,PodSandboxId:e49ea2b58308c6c0b9b2908ae1ab6a5818f361d3a75849eac0ab8eb63fab41ca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,State:CONTAINER_EXITED,CreatedAt:1746900143518984605,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sxk9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f4ab1a-93f7-4c1e-bcbe-5f9c9daaae46,},Annotations:map[string]string{io.kubernet
es.container.hash: 2406bd3f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ddf6914642a098d580c48db641460c4197df74a06bf7008e362f610f185934d,PodSandboxId:00c4138d2ab0d3a6880991ae6ca2f7c7e3c2de33b60a469043a91f7f8adef12d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1746900143498396555,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea7d9372-7c9e-444b-a628-0dfc4003f07d,},Annotations:map[string]string{io.kubernetes.container.
hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67bef24b725ebf7a2b7f343d7456516d6b5de38118f9cf48e7d70d9146ce2087,PodSandboxId:e30af250008246b61b90a3718d1c328f2984559c29b8526e0386129454a98b4a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_EXITED,CreatedAt:1746900143526731809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-t4rcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1c5c10-5db3-43e0-935a-0549799273f3,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.con
tainer.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5879bea6c3a25517766471c3eec758ce0c6d853db7055e1f3505263a674ed969,PodSandboxId:2cc3ee9d3458fbdf619a3c176b445eff63eefe6d42ab071484b6ca448013de07,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_EXITED,CreatedAt:1746900136904565424,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-581506,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe62e8740903c7a0badf385e7524512e,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74fd0b7de642965eb7e03cf324017cb2195034685758e46efbd5e6997aba9ae5,PodSandboxId:45aa7f96fbe49dd74e9cdfcc97884ce5caba88b39b6e9b00f2357661ecbba1a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,State:CONTAINER_EXITED,CreatedAt:1746900136908093042,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functiona
l-581506,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2dc81ade1bbda73868f61223889f8f4,},Annotations:map[string]string{io.kubernetes.container.hash: 20846f37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc42d63e6220a437de1d056d765ed97df2e6978798401b10283f61c7b1bc895b,PodSandboxId:6ed00def2c968d5a51634c7dafc6e6cc749b20e361a2365659842d41ca79ff9c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,State:CONTAINER_EXITED,CreatedAt:1746900136856417357,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-581506,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a886f34999ac0d6b56a638cab77f640,},Annotations:map[string]string{io.kubernetes.container.hash: fd54b99d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=928073cb-03e8-4c87-8048-6c490eae9961 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2bc59d73bfa2b       1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b   14 minutes ago      Running             coredns                   2                   ac1fe88b05f85       coredns-674b8bbfcf-t4rcv
	ca40d95863033       6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4   14 minutes ago      Running             kube-apiserver            0                   0c91495cd04f2       kube-apiserver-functional-581506
	1002d7979feaa       499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1   14 minutes ago      Running             etcd                      2                   1713a07d44b66       etcd-functional-581506
	5bad506e8de60       1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02   14 minutes ago      Running             kube-controller-manager   2                   ad6bf190d5567       kube-controller-manager-functional-581506
	206d421221f48       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       4                   a71305dd0a11c       storage-provisioner
	c908b9ef4e2dd       f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68   14 minutes ago      Running             kube-proxy                2                   da31c3a5af7bf       kube-proxy-sxk9c
	67bef24b725eb       1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b   16 minutes ago      Exited              coredns                   1                   e30af25000824       coredns-674b8bbfcf-t4rcv
	2b4eccacbeea6       f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68   16 minutes ago      Exited              kube-proxy                1                   e49ea2b58308c       kube-proxy-sxk9c
	9ddf6914642a0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Exited              storage-provisioner       3                   00c4138d2ab0d       storage-provisioner
	74fd0b7de6429       1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02   16 minutes ago      Exited              kube-controller-manager   1                   45aa7f96fbe49       kube-controller-manager-functional-581506
	5879bea6c3a25       499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1   16 minutes ago      Exited              etcd                      1                   2cc3ee9d3458f       etcd-functional-581506
	bc42d63e6220a       8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4   16 minutes ago      Exited              kube-scheduler            1                   6ed00def2c968       kube-scheduler-functional-581506
	
	
	==> coredns [2bc59d73bfa2bf2b1a39e797a7d2b573e644354a5079881f3dd26cec1c252aba] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.0
	linux/amd64, go1.23.3, 51e11f1
	[INFO] 127.0.0.1:47540 - 49812 "HINFO IN 3817603910003911590.6949861336943334396. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.032959679s
	
	
	==> coredns [67bef24b725ebf7a2b7f343d7456516d6b5de38118f9cf48e7d70d9146ce2087] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.0
	linux/amd64, go1.23.3, 51e11f1
	[INFO] 127.0.0.1:59748 - 21482 "HINFO IN 2761340015405739266.7136990693185190550. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022015892s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-581506
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-581506
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e96c83983357cd8557f3cdfe077a25cc73d485a4
	                    minikube.k8s.io/name=functional-581506
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_05_10T18_01_15_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 May 2025 18:01:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-581506
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 May 2025 18:18:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 May 2025 18:14:13 +0000   Sat, 10 May 2025 18:01:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 May 2025 18:14:13 +0000   Sat, 10 May 2025 18:01:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 May 2025 18:14:13 +0000   Sat, 10 May 2025 18:01:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 May 2025 18:14:13 +0000   Sat, 10 May 2025 18:01:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.52
	  Hostname:    functional-581506
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912748Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912748Ki
	  pods:               110
	System Info:
	  Machine ID:                 78012ce40601437bb4c2db7efb9be33a
	  System UUID:                78012ce4-0601-437b-b4c2-db7efb9be33a
	  Boot ID:                    832a94bf-8db0-4adf-aef4-977728fcc1b7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2024.11.2
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.33.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-674b8bbfcf-t4rcv                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     17m
	  kube-system                 etcd-functional-581506                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         17m
	  kube-system                 kube-apiserver-functional-581506             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-functional-581506    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-sxk9c                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-functional-581506             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 17m                kube-proxy       
	  Normal  Starting                 14m                kube-proxy       
	  Normal  Starting                 16m                kube-proxy       
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  17m                kubelet          Node functional-581506 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m                kubelet          Node functional-581506 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m                kubelet          Node functional-581506 status is now: NodeHasSufficientPID
	  Normal  NodeReady                17m                kubelet          Node functional-581506 status is now: NodeReady
	  Normal  RegisteredNode           17m                node-controller  Node functional-581506 event: Registered Node functional-581506 in Controller
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node functional-581506 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node functional-581506 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node functional-581506 status is now: NodeHasSufficientPID
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           16m                node-controller  Node functional-581506 event: Registered Node functional-581506 in Controller
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node functional-581506 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node functional-581506 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node functional-581506 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                node-controller  Node functional-581506 event: Registered Node functional-581506 in Controller
	
	
	==> dmesg <==
	[May10 18:00] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.000002] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.001507] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000567] (rpcbind)[143]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.143993] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000004] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.090211] kauditd_printk_skb: 1 callbacks suppressed
	[May10 18:01] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.148940] kauditd_printk_skb: 67 callbacks suppressed
	[  +0.675797] kauditd_printk_skb: 19 callbacks suppressed
	[ +10.795650] kauditd_printk_skb: 76 callbacks suppressed
	[ +20.864703] kauditd_printk_skb: 22 callbacks suppressed
	[May10 18:02] kauditd_printk_skb: 34 callbacks suppressed
	[  +4.648687] kauditd_printk_skb: 132 callbacks suppressed
	[  +5.789009] kauditd_printk_skb: 9 callbacks suppressed
	[ +13.341647] kauditd_printk_skb: 12 callbacks suppressed
	[May10 18:04] kauditd_printk_skb: 90 callbacks suppressed
	[  +1.054815] kauditd_printk_skb: 130 callbacks suppressed
	[  +0.904906] kauditd_printk_skb: 16 callbacks suppressed
	[May10 18:08] kauditd_printk_skb: 22 callbacks suppressed
	
	
	==> etcd [1002d7979feaa7a0860a8934e8992ae4fdc369b64f2a34d3a93bf01f4e8015e3] <==
	{"level":"info","ts":"2025-05-10T18:04:30.583609Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-05-10T18:04:30.584312Z","caller":"embed/etcd.go:762","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-05-10T18:04:30.584593Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"3baf479dc31b93a9","initial-advertise-peer-urls":["https://192.168.39.52:2380"],"listen-peer-urls":["https://192.168.39.52:2380"],"advertise-client-urls":["https://192.168.39.52:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.52:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-05-10T18:04:30.584641Z","caller":"embed/etcd.go:908","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-05-10T18:04:30.584794Z","caller":"embed/etcd.go:633","msg":"serving peer traffic","address":"192.168.39.52:2380"}
	{"level":"info","ts":"2025-05-10T18:04:30.584823Z","caller":"embed/etcd.go:603","msg":"cmux::serve","address":"192.168.39.52:2380"}
	{"level":"info","ts":"2025-05-10T18:04:31.534989Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3baf479dc31b93a9 is starting a new election at term 3"}
	{"level":"info","ts":"2025-05-10T18:04:31.535049Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3baf479dc31b93a9 became pre-candidate at term 3"}
	{"level":"info","ts":"2025-05-10T18:04:31.535080Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3baf479dc31b93a9 received MsgPreVoteResp from 3baf479dc31b93a9 at term 3"}
	{"level":"info","ts":"2025-05-10T18:04:31.535099Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3baf479dc31b93a9 became candidate at term 4"}
	{"level":"info","ts":"2025-05-10T18:04:31.535152Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3baf479dc31b93a9 received MsgVoteResp from 3baf479dc31b93a9 at term 4"}
	{"level":"info","ts":"2025-05-10T18:04:31.535163Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3baf479dc31b93a9 became leader at term 4"}
	{"level":"info","ts":"2025-05-10T18:04:31.535174Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3baf479dc31b93a9 elected leader 3baf479dc31b93a9 at term 4"}
	{"level":"info","ts":"2025-05-10T18:04:31.541728Z","caller":"etcdserver/server.go:2144","msg":"published local member to cluster through raft","local-member-id":"3baf479dc31b93a9","local-member-attributes":"{Name:functional-581506 ClientURLs:[https://192.168.39.52:2379]}","request-path":"/0/members/3baf479dc31b93a9/attributes","cluster-id":"26c9414d925de00c","publish-timeout":"7s"}
	{"level":"info","ts":"2025-05-10T18:04:31.541955Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T18:04:31.542039Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T18:04:31.542717Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T18:04:31.544923Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-05-10T18:04:31.544977Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-05-10T18:04:31.545343Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T18:04:31.545926Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-05-10T18:04:31.550131Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.52:2379"}
	{"level":"info","ts":"2025-05-10T18:14:31.638045Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":969}
	{"level":"info","ts":"2025-05-10T18:14:31.648248Z","caller":"mvcc/kvstore_compaction.go:71","msg":"finished scheduled compaction","compact-revision":969,"took":"9.559967ms","hash":1876746662,"current-db-size-bytes":3153920,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":3153920,"current-db-size-in-use":"3.2 MB"}
	{"level":"info","ts":"2025-05-10T18:14:31.648346Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":1876746662,"revision":969,"compact-revision":-1}
	
	
	==> etcd [5879bea6c3a25517766471c3eec758ce0c6d853db7055e1f3505263a674ed969] <==
	{"level":"info","ts":"2025-05-10T18:02:21.030061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3baf479dc31b93a9 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-05-10T18:02:21.030106Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3baf479dc31b93a9 received MsgPreVoteResp from 3baf479dc31b93a9 at term 2"}
	{"level":"info","ts":"2025-05-10T18:02:21.030146Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3baf479dc31b93a9 became candidate at term 3"}
	{"level":"info","ts":"2025-05-10T18:02:21.030207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3baf479dc31b93a9 received MsgVoteResp from 3baf479dc31b93a9 at term 3"}
	{"level":"info","ts":"2025-05-10T18:02:21.030228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3baf479dc31b93a9 became leader at term 3"}
	{"level":"info","ts":"2025-05-10T18:02:21.030247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3baf479dc31b93a9 elected leader 3baf479dc31b93a9 at term 3"}
	{"level":"info","ts":"2025-05-10T18:02:21.038152Z","caller":"etcdserver/server.go:2144","msg":"published local member to cluster through raft","local-member-id":"3baf479dc31b93a9","local-member-attributes":"{Name:functional-581506 ClientURLs:[https://192.168.39.52:2379]}","request-path":"/0/members/3baf479dc31b93a9/attributes","cluster-id":"26c9414d925de00c","publish-timeout":"7s"}
	{"level":"info","ts":"2025-05-10T18:02:21.038369Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T18:02:21.041197Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T18:02:21.041717Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T18:02:21.044437Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-05-10T18:02:21.044826Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T18:02:21.052743Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.52:2379"}
	{"level":"info","ts":"2025-05-10T18:02:21.064014Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-05-10T18:02:21.065946Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-05-10T18:02:47.594480Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-05-10T18:02:47.594538Z","caller":"embed/etcd.go:408","msg":"closing etcd server","name":"functional-581506","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.52:2380"],"advertise-client-urls":["https://192.168.39.52:2379"]}
	{"level":"warn","ts":"2025-05-10T18:02:47.692253Z","caller":"embed/serve.go:235","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.52:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-05-10T18:02:47.692414Z","caller":"embed/serve.go:237","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.52:2379: use of closed network connection"}
	{"level":"info","ts":"2025-05-10T18:02:47.692332Z","caller":"etcdserver/server.go:1546","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"3baf479dc31b93a9","current-leader-member-id":"3baf479dc31b93a9"}
	{"level":"warn","ts":"2025-05-10T18:02:47.692493Z","caller":"embed/serve.go:235","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-05-10T18:02:47.692590Z","caller":"embed/serve.go:237","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-05-10T18:02:47.696041Z","caller":"embed/etcd.go:613","msg":"stopping serving peer traffic","address":"192.168.39.52:2380"}
	{"level":"info","ts":"2025-05-10T18:02:47.696298Z","caller":"embed/etcd.go:618","msg":"stopped serving peer traffic","address":"192.168.39.52:2380"}
	{"level":"info","ts":"2025-05-10T18:02:47.696390Z","caller":"embed/etcd.go:410","msg":"closed etcd server","name":"functional-581506","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.52:2380"],"advertise-client-urls":["https://192.168.39.52:2379"]}
	
	
	==> kernel <==
	 18:18:56 up 18 min,  0 user,  load average: 0.64, 0.24, 0.14
	Linux functional-581506 5.10.207 #1 SMP Fri May 9 03:49:24 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2024.11.2"
	
	
	==> kube-apiserver [ca40d958630336ad5282e3e644a344eb6222b09f601d44b816dbc17429e58924] <==
	I0510 18:04:32.992976       1 shared_informer.go:357] "Caches are synced" controller="node_authorizer"
	I0510 18:04:33.822508       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0510 18:04:33.885749       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0510 18:04:35.065207       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0510 18:04:35.106821       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0510 18:04:35.136969       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0510 18:04:35.144671       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0510 18:04:36.229591       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0510 18:04:36.517419       1 controller.go:667] quota admission added evaluator for: endpoints
	I0510 18:04:36.581373       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 18:04:36.669214       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0510 18:08:43.916202       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 18:08:43.922038       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.104.141.104"}
	I0510 18:08:47.337692       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 18:08:48.351686       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.104.34.72"}
	I0510 18:08:48.356985       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 18:08:49.106605       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 18:08:49.110056       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.108.45.176"}
	I0510 18:08:54.502666       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 18:08:54.508618       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.111.228.11"}
	I0510 18:14:32.894709       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 18:16:07.436962       1 controller.go:667] quota admission added evaluator for: namespaces
	I0510 18:16:07.748671       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.114.28"}
	I0510 18:16:07.755311       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 18:16:07.788681       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.60.139"}
	
	
	==> kube-controller-manager [5bad506e8de60f9ec83d122523ca19234a72175234ffd3433d02684eb651ce9d] <==
	I0510 18:04:36.183471       1 shared_informer.go:357] "Caches are synced" controller="PVC protection"
	I0510 18:04:36.186114       1 shared_informer.go:357] "Caches are synced" controller="GC"
	I0510 18:04:36.188512       1 shared_informer.go:357] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0510 18:04:36.213570       1 shared_informer.go:357] "Caches are synced" controller="ephemeral"
	I0510 18:04:36.215649       1 shared_informer.go:357] "Caches are synced" controller="endpoint_slice_mirroring"
	I0510 18:04:36.228498       1 shared_informer.go:357] "Caches are synced" controller="ReplicaSet"
	I0510 18:04:36.233748       1 shared_informer.go:357] "Caches are synced" controller="stateful set"
	I0510 18:04:36.311659       1 shared_informer.go:357] "Caches are synced" controller="daemon sets"
	I0510 18:04:36.312725       1 shared_informer.go:357] "Caches are synced" controller="attach detach"
	I0510 18:04:36.380908       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0510 18:04:36.428129       1 shared_informer.go:357] "Caches are synced" controller="service account"
	I0510 18:04:36.472582       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0510 18:04:36.518918       1 shared_informer.go:357] "Caches are synced" controller="namespace"
	I0510 18:04:36.901758       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0510 18:04:36.901798       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0510 18:04:36.901804       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0510 18:04:36.904012       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	E0510 18:16:07.562224       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 18:16:07.570114       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 18:16:07.576704       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 18:16:07.584223       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 18:16:07.591210       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 18:16:07.595649       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 18:16:07.607783       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 18:16:07.608070       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [74fd0b7de642965eb7e03cf324017cb2195034685758e46efbd5e6997aba9ae5] <==
	I0510 18:02:26.095808       1 shared_informer.go:357] "Caches are synced" controller="service account"
	I0510 18:02:26.107982       1 shared_informer.go:357] "Caches are synced" controller="endpoint_slice"
	I0510 18:02:26.113575       1 shared_informer.go:357] "Caches are synced" controller="namespace"
	I0510 18:02:26.114829       1 shared_informer.go:357] "Caches are synced" controller="ReplicaSet"
	I0510 18:02:26.118078       1 shared_informer.go:357] "Caches are synced" controller="cronjob"
	I0510 18:02:26.121663       1 shared_informer.go:357] "Caches are synced" controller="daemon sets"
	I0510 18:02:26.127470       1 shared_informer.go:357] "Caches are synced" controller="deployment"
	I0510 18:02:26.128559       1 shared_informer.go:357] "Caches are synced" controller="job"
	I0510 18:02:26.135167       1 shared_informer.go:357] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0510 18:02:26.140926       1 shared_informer.go:357] "Caches are synced" controller="taint-eviction-controller"
	I0510 18:02:26.149406       1 shared_informer.go:357] "Caches are synced" controller="ClusterRoleAggregator"
	I0510 18:02:26.194556       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrapproving"
	I0510 18:02:26.234159       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0510 18:02:26.234341       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0510 18:02:26.234405       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0510 18:02:26.234431       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0510 18:02:26.248426       1 shared_informer.go:357] "Caches are synced" controller="HPA"
	I0510 18:02:26.262947       1 shared_informer.go:357] "Caches are synced" controller="persistent volume"
	I0510 18:02:26.312142       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0510 18:02:26.393774       1 shared_informer.go:357] "Caches are synced" controller="attach detach"
	I0510 18:02:26.402315       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0510 18:02:26.844411       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0510 18:02:26.844453       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0510 18:02:26.844461       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0510 18:02:26.854122       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [2b4eccacbeea6a58cc9c575f2c2bf5f8297029f9c9d2a9264bcf3e69644b4c28] <==
	E0510 18:02:23.839411       1 proxier.go:732] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0510 18:02:23.859564       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.39.52"]
	E0510 18:02:23.859640       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0510 18:02:23.913819       1 server_linux.go:122] "No iptables support for family" ipFamily="IPv6"
	I0510 18:02:23.913976       1 server.go:256] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0510 18:02:23.914004       1 server_linux.go:145] "Using iptables Proxier"
	I0510 18:02:23.928588       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0510 18:02:23.928908       1 server.go:516] "Version info" version="v1.33.0"
	I0510 18:02:23.928939       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0510 18:02:23.939076       1 config.go:199] "Starting service config controller"
	I0510 18:02:23.939113       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0510 18:02:23.939140       1 config.go:105] "Starting endpoint slice config controller"
	I0510 18:02:23.939144       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0510 18:02:23.939155       1 config.go:440] "Starting serviceCIDR config controller"
	I0510 18:02:23.939158       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0510 18:02:23.939762       1 config.go:329] "Starting node config controller"
	I0510 18:02:23.939818       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0510 18:02:24.039379       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
	I0510 18:02:24.039423       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	I0510 18:02:24.039626       1 shared_informer.go:357] "Caches are synced" controller="service config"
	I0510 18:02:24.040552       1 shared_informer.go:357] "Caches are synced" controller="node config"
	
	
	==> kube-proxy [c908b9ef4e2dd4afa4d8c8077af1366569126a578de927d95d14f07813040bab] <==
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0510 18:04:28.043194       1 server.go:704] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-581506\": dial tcp 192.168.39.52:8441: connect: connection refused"
	E0510 18:04:29.208643       1 server.go:704] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-581506\": dial tcp 192.168.39.52:8441: connect: connection refused"
	I0510 18:04:32.936969       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.39.52"]
	E0510 18:04:32.937354       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0510 18:04:33.046754       1 server_linux.go:122] "No iptables support for family" ipFamily="IPv6"
	I0510 18:04:33.046916       1 server.go:256] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0510 18:04:33.046981       1 server_linux.go:145] "Using iptables Proxier"
	I0510 18:04:33.060194       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0510 18:04:33.060559       1 server.go:516] "Version info" version="v1.33.0"
	I0510 18:04:33.060762       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0510 18:04:33.065256       1 config.go:199] "Starting service config controller"
	I0510 18:04:33.068415       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0510 18:04:33.068567       1 config.go:105] "Starting endpoint slice config controller"
	I0510 18:04:33.068590       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0510 18:04:33.068693       1 config.go:440] "Starting serviceCIDR config controller"
	I0510 18:04:33.073303       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0510 18:04:33.073374       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
	I0510 18:04:33.068977       1 config.go:329] "Starting node config controller"
	I0510 18:04:33.073428       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0510 18:04:33.169005       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	I0510 18:04:33.169120       1 shared_informer.go:357] "Caches are synced" controller="service config"
	I0510 18:04:33.173643       1 shared_informer.go:357] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [bc42d63e6220a437de1d056d765ed97df2e6978798401b10283f61c7b1bc895b] <==
	I0510 18:02:21.278714       1 serving.go:386] Generated self-signed cert in-memory
	W0510 18:02:22.765853       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0510 18:02:22.766075       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0510 18:02:22.766103       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0510 18:02:22.766193       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0510 18:02:22.804654       1 server.go:171] "Starting Kubernetes Scheduler" version="v1.33.0"
	I0510 18:02:22.804770       1 server.go:173] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0510 18:02:22.806849       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0510 18:02:22.807232       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0510 18:02:22.807327       1 shared_informer.go:350] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0510 18:02:22.807360       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0510 18:02:22.907841       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0510 18:02:47.604715       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	May 10 18:18:20 functional-581506 kubelet[6715]: E0510 18:18:20.179234    6715 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746901100178335289,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:165066,},InodesUsed:&UInt64Value{Value:82,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:18:27 functional-581506 kubelet[6715]: E0510 18:18:27.838660    6715 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-581506_kube-system_0a886f34999ac0d6b56a638cab77f640_2\" already exists"
	May 10 18:18:27 functional-581506 kubelet[6715]: E0510 18:18:27.838736    6715 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-581506_kube-system_0a886f34999ac0d6b56a638cab77f640_2\" already exists" pod="kube-system/kube-scheduler-functional-581506"
	May 10 18:18:27 functional-581506 kubelet[6715]: E0510 18:18:27.838755    6715 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-581506_kube-system_0a886f34999ac0d6b56a638cab77f640_2\" already exists" pod="kube-system/kube-scheduler-functional-581506"
	May 10 18:18:27 functional-581506 kubelet[6715]: E0510 18:18:27.838799    6715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-functional-581506_kube-system(0a886f34999ac0d6b56a638cab77f640)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-scheduler-functional-581506_kube-system(0a886f34999ac0d6b56a638cab77f640)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-scheduler-functional-581506_kube-system_0a886f34999ac0d6b56a638cab77f640_2\\\" already exists\"" pod="kube-system/kube-scheduler-functional-581506" podUID="0a886f34999ac0d6b56a638cab77f640"
	May 10 18:18:29 functional-581506 kubelet[6715]: E0510 18:18:29.942642    6715 manager.go:1116] Failed to create existing container: /kubepods/besteffort/podc3f4ab1a-93f7-4c1e-bcbe-5f9c9daaae46/crio-e49ea2b58308c6c0b9b2908ae1ab6a5818f361d3a75849eac0ab8eb63fab41ca: Error finding container e49ea2b58308c6c0b9b2908ae1ab6a5818f361d3a75849eac0ab8eb63fab41ca: Status 404 returned error can't find the container with id e49ea2b58308c6c0b9b2908ae1ab6a5818f361d3a75849eac0ab8eb63fab41ca
	May 10 18:18:29 functional-581506 kubelet[6715]: E0510 18:18:29.943232    6715 manager.go:1116] Failed to create existing container: /kubepods/burstable/podfe62e8740903c7a0badf385e7524512e/crio-2cc3ee9d3458fbdf619a3c176b445eff63eefe6d42ab071484b6ca448013de07: Error finding container 2cc3ee9d3458fbdf619a3c176b445eff63eefe6d42ab071484b6ca448013de07: Status 404 returned error can't find the container with id 2cc3ee9d3458fbdf619a3c176b445eff63eefe6d42ab071484b6ca448013de07
	May 10 18:18:29 functional-581506 kubelet[6715]: E0510 18:18:29.943662    6715 manager.go:1116] Failed to create existing container: /kubepods/besteffort/podea7d9372-7c9e-444b-a628-0dfc4003f07d/crio-00c4138d2ab0d3a6880991ae6ca2f7c7e3c2de33b60a469043a91f7f8adef12d: Error finding container 00c4138d2ab0d3a6880991ae6ca2f7c7e3c2de33b60a469043a91f7f8adef12d: Status 404 returned error can't find the container with id 00c4138d2ab0d3a6880991ae6ca2f7c7e3c2de33b60a469043a91f7f8adef12d
	May 10 18:18:29 functional-581506 kubelet[6715]: E0510 18:18:29.944311    6715 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod0a886f34999ac0d6b56a638cab77f640/crio-6ed00def2c968d5a51634c7dafc6e6cc749b20e361a2365659842d41ca79ff9c: Error finding container 6ed00def2c968d5a51634c7dafc6e6cc749b20e361a2365659842d41ca79ff9c: Status 404 returned error can't find the container with id 6ed00def2c968d5a51634c7dafc6e6cc749b20e361a2365659842d41ca79ff9c
	May 10 18:18:29 functional-581506 kubelet[6715]: E0510 18:18:29.944603    6715 manager.go:1116] Failed to create existing container: /kubepods/burstable/podb2dc81ade1bbda73868f61223889f8f4/crio-45aa7f96fbe49dd74e9cdfcc97884ce5caba88b39b6e9b00f2357661ecbba1a3: Error finding container 45aa7f96fbe49dd74e9cdfcc97884ce5caba88b39b6e9b00f2357661ecbba1a3: Status 404 returned error can't find the container with id 45aa7f96fbe49dd74e9cdfcc97884ce5caba88b39b6e9b00f2357661ecbba1a3
	May 10 18:18:29 functional-581506 kubelet[6715]: E0510 18:18:29.944901    6715 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod0b1c5c10-5db3-43e0-935a-0549799273f3/crio-e30af250008246b61b90a3718d1c328f2984559c29b8526e0386129454a98b4a: Error finding container e30af250008246b61b90a3718d1c328f2984559c29b8526e0386129454a98b4a: Status 404 returned error can't find the container with id e30af250008246b61b90a3718d1c328f2984559c29b8526e0386129454a98b4a
	May 10 18:18:30 functional-581506 kubelet[6715]: E0510 18:18:30.181799    6715 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746901110181192391,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:165066,},InodesUsed:&UInt64Value{Value:82,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:18:30 functional-581506 kubelet[6715]: E0510 18:18:30.181952    6715 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746901110181192391,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:165066,},InodesUsed:&UInt64Value{Value:82,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:18:40 functional-581506 kubelet[6715]: E0510 18:18:40.184151    6715 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746901120183647609,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:165066,},InodesUsed:&UInt64Value{Value:82,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:18:40 functional-581506 kubelet[6715]: E0510 18:18:40.184199    6715 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746901120183647609,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:165066,},InodesUsed:&UInt64Value{Value:82,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:18:42 functional-581506 kubelet[6715]: E0510 18:18:42.830045    6715 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-581506_kube-system_0a886f34999ac0d6b56a638cab77f640_2\" already exists"
	May 10 18:18:42 functional-581506 kubelet[6715]: E0510 18:18:42.830412    6715 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-581506_kube-system_0a886f34999ac0d6b56a638cab77f640_2\" already exists" pod="kube-system/kube-scheduler-functional-581506"
	May 10 18:18:42 functional-581506 kubelet[6715]: E0510 18:18:42.830474    6715 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-581506_kube-system_0a886f34999ac0d6b56a638cab77f640_2\" already exists" pod="kube-system/kube-scheduler-functional-581506"
	May 10 18:18:42 functional-581506 kubelet[6715]: E0510 18:18:42.830583    6715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-functional-581506_kube-system(0a886f34999ac0d6b56a638cab77f640)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-scheduler-functional-581506_kube-system(0a886f34999ac0d6b56a638cab77f640)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-scheduler-functional-581506_kube-system_0a886f34999ac0d6b56a638cab77f640_2\\\" already exists\"" pod="kube-system/kube-scheduler-functional-581506" podUID="0a886f34999ac0d6b56a638cab77f640"
	May 10 18:18:50 functional-581506 kubelet[6715]: E0510 18:18:50.186953    6715 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746901130186531322,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:165066,},InodesUsed:&UInt64Value{Value:82,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:18:50 functional-581506 kubelet[6715]: E0510 18:18:50.187050    6715 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746901130186531322,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:165066,},InodesUsed:&UInt64Value{Value:82,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:18:55 functional-581506 kubelet[6715]: E0510 18:18:55.837949    6715 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-581506_kube-system_0a886f34999ac0d6b56a638cab77f640_2\" already exists"
	May 10 18:18:55 functional-581506 kubelet[6715]: E0510 18:18:55.838004    6715 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-581506_kube-system_0a886f34999ac0d6b56a638cab77f640_2\" already exists" pod="kube-system/kube-scheduler-functional-581506"
	May 10 18:18:55 functional-581506 kubelet[6715]: E0510 18:18:55.838022    6715 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-581506_kube-system_0a886f34999ac0d6b56a638cab77f640_2\" already exists" pod="kube-system/kube-scheduler-functional-581506"
	May 10 18:18:55 functional-581506 kubelet[6715]: E0510 18:18:55.838063    6715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-functional-581506_kube-system(0a886f34999ac0d6b56a638cab77f640)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-scheduler-functional-581506_kube-system(0a886f34999ac0d6b56a638cab77f640)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-scheduler-functional-581506_kube-system_0a886f34999ac0d6b56a638cab77f640_2\\\" already exists\"" pod="kube-system/kube-scheduler-functional-581506" podUID="0a886f34999ac0d6b56a638cab77f640"
	
	
	==> storage-provisioner [206d421221f482411c4e5a5ef3f7102eccd8b38f07c242446855962f9958f985] <==
	W0510 18:18:32.046103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:18:34.050030       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:18:34.055107       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:18:36.058985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:18:36.064413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:18:38.068245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:18:38.077244       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:18:40.081240       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:18:40.087718       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:18:42.091035       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:18:42.100378       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:18:44.103196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:18:44.109693       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:18:46.113155       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:18:46.123440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:18:48.127563       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:18:48.132852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:18:50.136646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:18:50.146058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:18:52.149498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:18:52.156146       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:18:54.159111       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:18:54.164741       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:18:56.168527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:18:56.174312       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [9ddf6914642a098d580c48db641460c4197df74a06bf7008e362f610f185934d] <==
	I0510 18:02:23.672106       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0510 18:02:23.683594       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0510 18:02:23.683625       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0510 18:02:23.702833       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:02:27.159140       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:02:31.422834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:02:35.021700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:02:38.075295       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:02:41.098182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:02:41.109385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0510 18:02:41.109555       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0510 18:02:41.109770       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-581506_a46214da-4c1e-4fc3-976f-44d996fb2ca3!
	I0510 18:02:41.110126       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fd69f3b7-01e0-4535-950c-10464666b122", APIVersion:"v1", ResourceVersion:"525", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-581506_a46214da-4c1e-4fc3-976f-44d996fb2ca3 became leader
	W0510 18:02:41.126141       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:02:41.133416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0510 18:02:41.210982       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-581506_a46214da-4c1e-4fc3-976f-44d996fb2ca3!
	W0510 18:02:43.137335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:02:43.144935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:02:45.148031       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:02:45.154106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:02:47.157824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:02:47.175021       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
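
The kubelet entries in the log excerpt above repeatedly fail to recreate the kube-scheduler pod because a CRI-O sandbox with the same name already exists. A minimal sketch of how one might inspect and clear the stale sandbox from inside the VM, assuming the crictl CLI available in the minikube guest; <SANDBOX_ID> is a hypothetical placeholder for whatever ID the listing returns:

	# list sandboxes belonging to the scheduler pod, including exited ones
	out/minikube-linux-amd64 -p functional-581506 ssh -- sudo crictl pods --name kube-scheduler-functional-581506
	# stop and remove the stale sandbox so the kubelet can recreate it (<SANDBOX_ID> is a placeholder)
	out/minikube-linux-amd64 -p functional-581506 ssh -- sudo crictl stopp <SANDBOX_ID>
	out/minikube-linux-amd64 -p functional-581506 ssh -- sudo crictl rmp <SANDBOX_ID>
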
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-581506 -n functional-581506
helpers_test.go:261: (dbg) Run:  kubectl --context functional-581506 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount hello-node-connect-58f9cf68d8-prxzn hello-node-fcfd88b6f-gmdwq mysql-58ccfd96bb-2jm87 sp-pod dashboard-metrics-scraper-5d59dccf9b-w9spf kubernetes-dashboard-7779f9b69b-ljpkm
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-581506 describe pod busybox-mount hello-node-connect-58f9cf68d8-prxzn hello-node-fcfd88b6f-gmdwq mysql-58ccfd96bb-2jm87 sp-pod dashboard-metrics-scraper-5d59dccf9b-w9spf kubernetes-dashboard-7779f9b69b-ljpkm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-581506 describe pod busybox-mount hello-node-connect-58f9cf68d8-prxzn hello-node-fcfd88b6f-gmdwq mysql-58ccfd96bb-2jm87 sp-pod dashboard-metrics-scraper-5d59dccf9b-w9spf kubernetes-dashboard-7779f9b69b-ljpkm: exit status 1 (92.589162ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  mount-munger:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    Environment:  <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t5rkt (ro)
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-t5rkt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             hello-node-connect-58f9cf68d8-prxzn
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=hello-node-connect
	                  pod-template-hash=58f9cf68d8
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-node-connect-58f9cf68d8
	Containers:
	  echoserver:
	    Image:        registry.k8s.io/echoserver:1.8
	    Port:         <none>
	    Host Port:    <none>
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vjht5 (ro)
	Volumes:
	  kube-api-access-vjht5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             hello-node-fcfd88b6f-gmdwq
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=hello-node
	                  pod-template-hash=fcfd88b6f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-node-fcfd88b6f
	Containers:
	  echoserver:
	    Image:        registry.k8s.io/echoserver:1.8
	    Port:         <none>
	    Host Port:    <none>
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-56j2g (ro)
	Volumes:
	  kube-api-access-56j2g:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             mysql-58ccfd96bb-2jm87
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=mysql
	                  pod-template-hash=58ccfd96bb
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/mysql-58ccfd96bb
	Containers:
	  mysql:
	    Image:      docker.io/mysql:5.7
	    Port:       3306/TCP
	    Host Port:  0/TCP
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-v72cr (ro)
	Volumes:
	  kube-api-access-v72cr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  myfrontend:
	    Image:        docker.io/nginx
	    Port:         <none>
	    Host Port:    <none>
	    Environment:  <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6q8c7 (ro)
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-6q8c7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5d59dccf9b-w9spf" not found
	Error from server (NotFound): pods "kubernetes-dashboard-7779f9b69b-ljpkm" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context functional-581506 describe pod busybox-mount hello-node-connect-58f9cf68d8-prxzn hello-node-fcfd88b6f-gmdwq mysql-58ccfd96bb-2jm87 sp-pod dashboard-metrics-scraper-5d59dccf9b-w9spf kubernetes-dashboard-7779f9b69b-ljpkm: exit status 1
E0510 18:19:37.819068  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/client.crt: no such file or directory" logger="UnhandledError"
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.48s)
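
Every non-running pod described above (busybox-mount, hello-node-connect, hello-node, mysql, sp-pod) is Pending with Node: <none> and no events, which points at the kube-scheduler sandbox failures in the minikube logs rather than an image or networking problem. A short diagnostic sketch, assuming the standard kubeadm component=kube-scheduler label on the control-plane pod:

	# is the scheduler actually running?
	kubectl --context functional-581506 -n kube-system get pods -l component=kube-scheduler -o wide
	# everything still waiting to be scheduled
	kubectl --context functional-581506 get pods -A --field-selector=status.phase=Pending
	# scheduling events for one of the stuck pods (none are expected if the scheduler never came up)
	kubectl --context functional-581506 get events -n default --field-selector involvedObject.name=sp-pod
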

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (187.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [ea7d9372-7c9e-444b-a628-0dfc4003f07d] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0041607s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-581506 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-581506 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-581506 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-581506 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8bc72d16-7952-465f-ae78-44b54d916720] Pending
E0510 18:09:37.810377  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:130: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 3m0s: context deadline exceeded ****
functional_test_pvc_test.go:130: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-581506 -n functional-581506
functional_test_pvc_test.go:130: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-05-10 18:11:55.622202258 +0000 UTC m=+1189.949814897
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-581506 describe po sp-pod -n default
functional_test_pvc_test.go:130: (dbg) kubectl --context functional-581506 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Containers:
  myfrontend:
    Image:        docker.io/nginx
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6q8c7 (ro)
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-6q8c7:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-581506 logs sp-pod -n default
functional_test_pvc_test.go:130: (dbg) kubectl --context functional-581506 logs sp-pod -n default:
functional_test_pvc_test.go:131: failed waiting for pod: test=storage-provisioner within 3m0s: context deadline exceeded
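
sp-pod mounts the PersistentVolumeClaim myclaim (see the describe output above), so besides the scheduling problem it is worth confirming whether the claim ever bound and whether the hostpath provisioner reacted to it. A brief sketch using the same kubectl context as the test:

	# claim status and the storage class it was bound against
	kubectl --context functional-581506 get pvc myclaim -o wide
	kubectl --context functional-581506 describe pvc myclaim
	# provisioning events recorded for the claim, if any
	kubectl --context functional-581506 get events -n default --field-selector involvedObject.name=myclaim
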
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-581506 -n functional-581506
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-581506 logs -n 25: (1.451914423s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                  Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh     | functional-581506 ssh sudo cat                                          | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:08 UTC | 10 May 25 18:08 UTC |
	|         | /etc/ssl/certs/395980.pem                                               |                   |         |         |                     |                     |
	| ssh     | functional-581506 ssh sudo cat                                          | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:08 UTC | 10 May 25 18:08 UTC |
	|         | /etc/test/nested/copy/395980/hosts                                      |                   |         |         |                     |                     |
	| ssh     | functional-581506 ssh -n                                                | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:08 UTC | 10 May 25 18:08 UTC |
	|         | functional-581506 sudo cat                                              |                   |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                |                   |         |         |                     |                     |
	| ssh     | functional-581506 ssh sudo cat                                          | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:08 UTC | 10 May 25 18:08 UTC |
	|         | /usr/share/ca-certificates/395980.pem                                   |                   |         |         |                     |                     |
	| cp      | functional-581506 cp                                                    | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:08 UTC | 10 May 25 18:08 UTC |
	|         | testdata/cp-test.txt                                                    |                   |         |         |                     |                     |
	|         | /tmp/does/not/exist/cp-test.txt                                         |                   |         |         |                     |                     |
	| ssh     | functional-581506 ssh sudo cat                                          | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:08 UTC | 10 May 25 18:08 UTC |
	|         | /etc/ssl/certs/51391683.0                                               |                   |         |         |                     |                     |
	| ssh     | functional-581506 ssh -n                                                | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:08 UTC | 10 May 25 18:08 UTC |
	|         | functional-581506 sudo cat                                              |                   |         |         |                     |                     |
	|         | /tmp/does/not/exist/cp-test.txt                                         |                   |         |         |                     |                     |
	| ssh     | functional-581506 ssh sudo cat                                          | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:08 UTC | 10 May 25 18:08 UTC |
	|         | /etc/ssl/certs/3959802.pem                                              |                   |         |         |                     |                     |
	| ssh     | functional-581506 ssh sudo cat                                          | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:08 UTC | 10 May 25 18:08 UTC |
	|         | /usr/share/ca-certificates/3959802.pem                                  |                   |         |         |                     |                     |
	| ssh     | functional-581506 ssh sudo cat                                          | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:08 UTC | 10 May 25 18:08 UTC |
	|         | /etc/ssl/certs/3ec20f2e.0                                               |                   |         |         |                     |                     |
	| image   | functional-581506 image ls                                              | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:08 UTC | 10 May 25 18:08 UTC |
	| ssh     | functional-581506 ssh echo                                              | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:08 UTC | 10 May 25 18:08 UTC |
	|         | hello                                                                   |                   |         |         |                     |                     |
	| image   | functional-581506 image load --daemon                                   | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:08 UTC | 10 May 25 18:08 UTC |
	|         | kicbase/echo-server:functional-581506                                   |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| ssh     | functional-581506 ssh cat                                               | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:08 UTC | 10 May 25 18:08 UTC |
	|         | /etc/hostname                                                           |                   |         |         |                     |                     |
	| image   | functional-581506 image ls                                              | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:08 UTC | 10 May 25 18:08 UTC |
	| image   | functional-581506 image load --daemon                                   | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:08 UTC | 10 May 25 18:08 UTC |
	|         | kicbase/echo-server:functional-581506                                   |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image   | functional-581506 image ls                                              | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:08 UTC | 10 May 25 18:08 UTC |
	| image   | functional-581506 image save kicbase/echo-server:functional-581506      | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:08 UTC | 10 May 25 18:08 UTC |
	|         | /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image   | functional-581506 image rm                                              | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:08 UTC | 10 May 25 18:08 UTC |
	|         | kicbase/echo-server:functional-581506                                   |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image   | functional-581506 image ls                                              | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:08 UTC | 10 May 25 18:08 UTC |
	| image   | functional-581506 image load                                            | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:08 UTC | 10 May 25 18:08 UTC |
	|         | /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image   | functional-581506 image ls                                              | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:08 UTC | 10 May 25 18:08 UTC |
	| image   | functional-581506 image save --daemon                                   | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:08 UTC | 10 May 25 18:08 UTC |
	|         | kicbase/echo-server:functional-581506                                   |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| addons  | functional-581506 addons list                                           | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:08 UTC | 10 May 25 18:08 UTC |
	| addons  | functional-581506 addons list                                           | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:08 UTC | 10 May 25 18:08 UTC |
	|         | -o json                                                                 |                   |         |         |                     |                     |
	|---------|-------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/05/10 18:02:46
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0510 18:02:46.396513  402035 out.go:345] Setting OutFile to fd 1 ...
	I0510 18:02:46.396636  402035 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 18:02:46.396640  402035 out.go:358] Setting ErrFile to fd 2...
	I0510 18:02:46.396643  402035 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 18:02:46.396841  402035 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-388787/.minikube/bin
	I0510 18:02:46.397369  402035 out.go:352] Setting JSON to false
	I0510 18:02:46.398311  402035 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":27914,"bootTime":1746872252,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0510 18:02:46.398421  402035 start.go:140] virtualization: kvm guest
	I0510 18:02:46.400743  402035 out.go:177] * [functional-581506] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0510 18:02:46.402186  402035 out.go:177]   - MINIKUBE_LOCATION=20720
	I0510 18:02:46.402177  402035 notify.go:220] Checking for updates...
	I0510 18:02:46.403510  402035 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0510 18:02:46.405219  402035 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20720-388787/kubeconfig
	I0510 18:02:46.406775  402035 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-388787/.minikube
	I0510 18:02:46.408169  402035 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0510 18:02:46.409488  402035 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0510 18:02:46.411314  402035 config.go:182] Loaded profile config "functional-581506": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 18:02:46.411402  402035 driver.go:404] Setting default libvirt URI to qemu:///system
	I0510 18:02:46.411895  402035 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 18:02:46.411958  402035 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:02:46.428015  402035 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42895
	I0510 18:02:46.428521  402035 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:02:46.429033  402035 main.go:141] libmachine: Using API Version  1
	I0510 18:02:46.429050  402035 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:02:46.429423  402035 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:02:46.429597  402035 main.go:141] libmachine: (functional-581506) Calling .DriverName
	I0510 18:02:46.464202  402035 out.go:177] * Using the kvm2 driver based on existing profile
	I0510 18:02:46.465611  402035 start.go:304] selected driver: kvm2
	I0510 18:02:46.465621  402035 start.go:908] validating driver "kvm2" against &{Name:functional-581506 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.33.0 ClusterName:functional-581506 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.52 Port:8441 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 18:02:46.465726  402035 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0510 18:02:46.466055  402035 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0510 18:02:46.466154  402035 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20720-388787/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0510 18:02:46.483313  402035 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0510 18:02:46.484300  402035 start_flags.go:975] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0510 18:02:46.484336  402035 cni.go:84] Creating CNI manager for ""
	I0510 18:02:46.484393  402035 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0510 18:02:46.484445  402035 start.go:347] cluster config:
	{Name:functional-581506 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:functional-581506 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.52 Port:8441 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 18:02:46.484546  402035 iso.go:125] acquiring lock: {Name:mk19640015999219180c6685480547adf0c02201 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0510 18:02:46.486929  402035 out.go:177] * Starting "functional-581506" primary control-plane node in "functional-581506" cluster
	I0510 18:02:46.488381  402035 preload.go:131] Checking if preload exists for k8s version v1.33.0 and runtime crio
	I0510 18:02:46.488424  402035 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-cri-o-overlay-amd64.tar.lz4
	I0510 18:02:46.488433  402035 cache.go:56] Caching tarball of preloaded images
	I0510 18:02:46.488558  402035 preload.go:172] Found /home/jenkins/minikube-integration/20720-388787/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0510 18:02:46.488566  402035 cache.go:59] Finished verifying existence of preloaded tar for v1.33.0 on crio
	I0510 18:02:46.488662  402035 profile.go:143] Saving config to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/config.json ...
	I0510 18:02:46.488872  402035 start.go:360] acquireMachinesLock for functional-581506: {Name:mk11499d7756d503a7a24339ad1a7f9ab9dc0fab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0510 18:02:46.488936  402035 start.go:364] duration metric: took 49.209µs to acquireMachinesLock for "functional-581506"
	I0510 18:02:46.488949  402035 start.go:96] Skipping create...Using existing machine configuration
	I0510 18:02:46.488953  402035 fix.go:54] fixHost starting: 
	I0510 18:02:46.489257  402035 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 18:02:46.489298  402035 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:02:46.505903  402035 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46635
	I0510 18:02:46.506581  402035 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:02:46.507080  402035 main.go:141] libmachine: Using API Version  1
	I0510 18:02:46.507090  402035 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:02:46.507470  402035 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:02:46.507695  402035 main.go:141] libmachine: (functional-581506) Calling .DriverName
	I0510 18:02:46.507905  402035 main.go:141] libmachine: (functional-581506) Calling .GetState
	I0510 18:02:46.509827  402035 fix.go:112] recreateIfNeeded on functional-581506: state=Running err=<nil>
	W0510 18:02:46.509841  402035 fix.go:138] unexpected machine state, will restart: <nil>
	I0510 18:02:46.512283  402035 out.go:177] * Updating the running kvm2 "functional-581506" VM ...
	I0510 18:02:46.513904  402035 machine.go:93] provisionDockerMachine start ...
	I0510 18:02:46.513940  402035 main.go:141] libmachine: (functional-581506) Calling .DriverName
	I0510 18:02:46.514326  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHHostname
	I0510 18:02:46.517256  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:46.517672  402035 main.go:141] libmachine: (functional-581506) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:2c:dc", ip: ""} in network mk-functional-581506: {Iface:virbr1 ExpiryTime:2025-05-10 19:00:46 +0000 UTC Type:0 Mac:52:54:00:34:2c:dc Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-581506 Clientid:01:52:54:00:34:2c:dc}
	I0510 18:02:46.517709  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined IP address 192.168.39.52 and MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:46.517917  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHPort
	I0510 18:02:46.518128  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHKeyPath
	I0510 18:02:46.518280  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHKeyPath
	I0510 18:02:46.518424  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHUsername
	I0510 18:02:46.518549  402035 main.go:141] libmachine: Using SSH client type: native
	I0510 18:02:46.518772  402035 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I0510 18:02:46.518777  402035 main.go:141] libmachine: About to run SSH command:
	hostname
	I0510 18:02:46.640153  402035 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-581506
	
	I0510 18:02:46.640174  402035 main.go:141] libmachine: (functional-581506) Calling .GetMachineName
	I0510 18:02:46.640441  402035 buildroot.go:166] provisioning hostname "functional-581506"
	I0510 18:02:46.640464  402035 main.go:141] libmachine: (functional-581506) Calling .GetMachineName
	I0510 18:02:46.640667  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHHostname
	I0510 18:02:46.643291  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:46.643617  402035 main.go:141] libmachine: (functional-581506) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:2c:dc", ip: ""} in network mk-functional-581506: {Iface:virbr1 ExpiryTime:2025-05-10 19:00:46 +0000 UTC Type:0 Mac:52:54:00:34:2c:dc Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-581506 Clientid:01:52:54:00:34:2c:dc}
	I0510 18:02:46.643642  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined IP address 192.168.39.52 and MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:46.643791  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHPort
	I0510 18:02:46.644010  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHKeyPath
	I0510 18:02:46.644246  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHKeyPath
	I0510 18:02:46.644473  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHUsername
	I0510 18:02:46.644671  402035 main.go:141] libmachine: Using SSH client type: native
	I0510 18:02:46.644975  402035 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I0510 18:02:46.644986  402035 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-581506 && echo "functional-581506" | sudo tee /etc/hostname
	I0510 18:02:46.783110  402035 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-581506
	
	I0510 18:02:46.783132  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHHostname
	I0510 18:02:46.786450  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:46.786777  402035 main.go:141] libmachine: (functional-581506) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:2c:dc", ip: ""} in network mk-functional-581506: {Iface:virbr1 ExpiryTime:2025-05-10 19:00:46 +0000 UTC Type:0 Mac:52:54:00:34:2c:dc Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-581506 Clientid:01:52:54:00:34:2c:dc}
	I0510 18:02:46.786821  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined IP address 192.168.39.52 and MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:46.787057  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHPort
	I0510 18:02:46.787283  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHKeyPath
	I0510 18:02:46.787424  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHKeyPath
	I0510 18:02:46.787531  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHUsername
	I0510 18:02:46.787679  402035 main.go:141] libmachine: Using SSH client type: native
	I0510 18:02:46.787970  402035 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I0510 18:02:46.787987  402035 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-581506' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-581506/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-581506' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0510 18:02:46.908762  402035 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0510 18:02:46.908797  402035 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20720-388787/.minikube CaCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20720-388787/.minikube}
	I0510 18:02:46.908841  402035 buildroot.go:174] setting up certificates
	I0510 18:02:46.908855  402035 provision.go:84] configureAuth start
	I0510 18:02:46.908864  402035 main.go:141] libmachine: (functional-581506) Calling .GetMachineName
	I0510 18:02:46.909218  402035 main.go:141] libmachine: (functional-581506) Calling .GetIP
	I0510 18:02:46.911981  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:46.912317  402035 main.go:141] libmachine: (functional-581506) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:2c:dc", ip: ""} in network mk-functional-581506: {Iface:virbr1 ExpiryTime:2025-05-10 19:00:46 +0000 UTC Type:0 Mac:52:54:00:34:2c:dc Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-581506 Clientid:01:52:54:00:34:2c:dc}
	I0510 18:02:46.912335  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined IP address 192.168.39.52 and MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:46.912579  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHHostname
	I0510 18:02:46.915330  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:46.915770  402035 main.go:141] libmachine: (functional-581506) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:2c:dc", ip: ""} in network mk-functional-581506: {Iface:virbr1 ExpiryTime:2025-05-10 19:00:46 +0000 UTC Type:0 Mac:52:54:00:34:2c:dc Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-581506 Clientid:01:52:54:00:34:2c:dc}
	I0510 18:02:46.915808  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined IP address 192.168.39.52 and MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:46.915943  402035 provision.go:143] copyHostCerts
	I0510 18:02:46.916005  402035 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem, removing ...
	I0510 18:02:46.916026  402035 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem
	I0510 18:02:46.916089  402035 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem (1078 bytes)
	I0510 18:02:46.916183  402035 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem, removing ...
	I0510 18:02:46.916187  402035 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem
	I0510 18:02:46.916210  402035 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem (1123 bytes)
	I0510 18:02:46.916258  402035 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem, removing ...
	I0510 18:02:46.916261  402035 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem
	I0510 18:02:46.916283  402035 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem (1675 bytes)
	I0510 18:02:46.916322  402035 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem org=jenkins.functional-581506 san=[127.0.0.1 192.168.39.52 functional-581506 localhost minikube]
	I0510 18:02:47.231951  402035 provision.go:177] copyRemoteCerts
	I0510 18:02:47.232007  402035 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0510 18:02:47.232032  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHHostname
	I0510 18:02:47.235562  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:47.235996  402035 main.go:141] libmachine: (functional-581506) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:2c:dc", ip: ""} in network mk-functional-581506: {Iface:virbr1 ExpiryTime:2025-05-10 19:00:46 +0000 UTC Type:0 Mac:52:54:00:34:2c:dc Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-581506 Clientid:01:52:54:00:34:2c:dc}
	I0510 18:02:47.236028  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined IP address 192.168.39.52 and MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:47.236244  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHPort
	I0510 18:02:47.236501  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHKeyPath
	I0510 18:02:47.236684  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHUsername
	I0510 18:02:47.236859  402035 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/functional-581506/id_rsa Username:docker}
	I0510 18:02:47.328493  402035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0510 18:02:47.362929  402035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0510 18:02:47.402301  402035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0510 18:02:47.436276  402035 provision.go:87] duration metric: took 527.405123ms to configureAuth
	I0510 18:02:47.436303  402035 buildroot.go:189] setting minikube options for container-runtime
	I0510 18:02:47.436596  402035 config.go:182] Loaded profile config "functional-581506": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 18:02:47.436690  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHHostname
	I0510 18:02:47.440022  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:47.440415  402035 main.go:141] libmachine: (functional-581506) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:2c:dc", ip: ""} in network mk-functional-581506: {Iface:virbr1 ExpiryTime:2025-05-10 19:00:46 +0000 UTC Type:0 Mac:52:54:00:34:2c:dc Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-581506 Clientid:01:52:54:00:34:2c:dc}
	I0510 18:02:47.440441  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined IP address 192.168.39.52 and MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:47.440681  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHPort
	I0510 18:02:47.440965  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHKeyPath
	I0510 18:02:47.441340  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHKeyPath
	I0510 18:02:47.441565  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHUsername
	I0510 18:02:47.441774  402035 main.go:141] libmachine: Using SSH client type: native
	I0510 18:02:47.442138  402035 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I0510 18:02:47.442150  402035 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0510 18:02:53.140976  402035 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0510 18:02:53.140992  402035 machine.go:96] duration metric: took 6.627069114s to provisionDockerMachine
	I0510 18:02:53.141003  402035 start.go:293] postStartSetup for "functional-581506" (driver="kvm2")
	I0510 18:02:53.141012  402035 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0510 18:02:53.141027  402035 main.go:141] libmachine: (functional-581506) Calling .DriverName
	I0510 18:02:53.141384  402035 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0510 18:02:53.141411  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHHostname
	I0510 18:02:53.144494  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:53.144834  402035 main.go:141] libmachine: (functional-581506) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:2c:dc", ip: ""} in network mk-functional-581506: {Iface:virbr1 ExpiryTime:2025-05-10 19:00:46 +0000 UTC Type:0 Mac:52:54:00:34:2c:dc Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-581506 Clientid:01:52:54:00:34:2c:dc}
	I0510 18:02:53.144853  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined IP address 192.168.39.52 and MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:53.144999  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHPort
	I0510 18:02:53.145178  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHKeyPath
	I0510 18:02:53.145322  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHUsername
	I0510 18:02:53.145457  402035 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/functional-581506/id_rsa Username:docker}
	I0510 18:02:53.240441  402035 ssh_runner.go:195] Run: cat /etc/os-release
	I0510 18:02:53.245712  402035 info.go:137] Remote host: Buildroot 2024.11.2
	I0510 18:02:53.245743  402035 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-388787/.minikube/addons for local assets ...
	I0510 18:02:53.245813  402035 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-388787/.minikube/files for local assets ...
	I0510 18:02:53.245880  402035 filesync.go:149] local asset: /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem -> 3959802.pem in /etc/ssl/certs
	I0510 18:02:53.245953  402035 filesync.go:149] local asset: /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/test/nested/copy/395980/hosts -> hosts in /etc/test/nested/copy/395980
	I0510 18:02:53.245988  402035 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/395980
	I0510 18:02:53.258624  402035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem --> /etc/ssl/certs/3959802.pem (1708 bytes)
	I0510 18:02:53.295954  402035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/test/nested/copy/395980/hosts --> /etc/test/nested/copy/395980/hosts (40 bytes)
	I0510 18:02:53.327666  402035 start.go:296] duration metric: took 186.648319ms for postStartSetup
	I0510 18:02:53.327715  402035 fix.go:56] duration metric: took 6.838760767s for fixHost
	I0510 18:02:53.327740  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHHostname
	I0510 18:02:53.330484  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:53.330859  402035 main.go:141] libmachine: (functional-581506) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:2c:dc", ip: ""} in network mk-functional-581506: {Iface:virbr1 ExpiryTime:2025-05-10 19:00:46 +0000 UTC Type:0 Mac:52:54:00:34:2c:dc Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-581506 Clientid:01:52:54:00:34:2c:dc}
	I0510 18:02:53.330890  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined IP address 192.168.39.52 and MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:53.331009  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHPort
	I0510 18:02:53.331230  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHKeyPath
	I0510 18:02:53.331412  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHKeyPath
	I0510 18:02:53.331544  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHUsername
	I0510 18:02:53.331662  402035 main.go:141] libmachine: Using SSH client type: native
	I0510 18:02:53.331877  402035 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I0510 18:02:53.331882  402035 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0510 18:02:53.453061  402035 main.go:141] libmachine: SSH cmd err, output: <nil>: 1746900173.447130487
	
	I0510 18:02:53.453092  402035 fix.go:216] guest clock: 1746900173.447130487
	I0510 18:02:53.453099  402035 fix.go:229] Guest: 2025-05-10 18:02:53.447130487 +0000 UTC Remote: 2025-05-10 18:02:53.327719446 +0000 UTC m=+6.971359045 (delta=119.411041ms)
	I0510 18:02:53.453119  402035 fix.go:200] guest clock delta is within tolerance: 119.411041ms
	I0510 18:02:53.453123  402035 start.go:83] releasing machines lock for "functional-581506", held for 6.964180893s
	I0510 18:02:53.453145  402035 main.go:141] libmachine: (functional-581506) Calling .DriverName
	I0510 18:02:53.453448  402035 main.go:141] libmachine: (functional-581506) Calling .GetIP
	I0510 18:02:53.456220  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:53.456476  402035 main.go:141] libmachine: (functional-581506) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:2c:dc", ip: ""} in network mk-functional-581506: {Iface:virbr1 ExpiryTime:2025-05-10 19:00:46 +0000 UTC Type:0 Mac:52:54:00:34:2c:dc Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-581506 Clientid:01:52:54:00:34:2c:dc}
	I0510 18:02:53.456494  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined IP address 192.168.39.52 and MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:53.456627  402035 main.go:141] libmachine: (functional-581506) Calling .DriverName
	I0510 18:02:53.457205  402035 main.go:141] libmachine: (functional-581506) Calling .DriverName
	I0510 18:02:53.457369  402035 main.go:141] libmachine: (functional-581506) Calling .DriverName
	I0510 18:02:53.457461  402035 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0510 18:02:53.457506  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHHostname
	I0510 18:02:53.457607  402035 ssh_runner.go:195] Run: cat /version.json
	I0510 18:02:53.457625  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHHostname
	I0510 18:02:53.460159  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:53.460383  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:53.460534  402035 main.go:141] libmachine: (functional-581506) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:2c:dc", ip: ""} in network mk-functional-581506: {Iface:virbr1 ExpiryTime:2025-05-10 19:00:46 +0000 UTC Type:0 Mac:52:54:00:34:2c:dc Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-581506 Clientid:01:52:54:00:34:2c:dc}
	I0510 18:02:53.460568  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined IP address 192.168.39.52 and MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:53.460745  402035 main.go:141] libmachine: (functional-581506) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:2c:dc", ip: ""} in network mk-functional-581506: {Iface:virbr1 ExpiryTime:2025-05-10 19:00:46 +0000 UTC Type:0 Mac:52:54:00:34:2c:dc Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-581506 Clientid:01:52:54:00:34:2c:dc}
	I0510 18:02:53.460761  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined IP address 192.168.39.52 and MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:02:53.460773  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHPort
	I0510 18:02:53.460958  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHKeyPath
	I0510 18:02:53.460967  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHPort
	I0510 18:02:53.461130  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHUsername
	I0510 18:02:53.461146  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHKeyPath
	I0510 18:02:53.461326  402035 main.go:141] libmachine: (functional-581506) Calling .GetSSHUsername
	I0510 18:02:53.461314  402035 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/functional-581506/id_rsa Username:docker}
	I0510 18:02:53.461447  402035 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/functional-581506/id_rsa Username:docker}
	I0510 18:02:53.559403  402035 ssh_runner.go:195] Run: systemctl --version
	I0510 18:02:53.582132  402035 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0510 18:02:53.770630  402035 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0510 18:02:53.783161  402035 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0510 18:02:53.783285  402035 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0510 18:02:53.798993  402035 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0510 18:02:53.799013  402035 start.go:495] detecting cgroup driver to use...
	I0510 18:02:53.799097  402035 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0510 18:02:53.823538  402035 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0510 18:02:53.848708  402035 docker.go:225] disabling cri-docker service (if available) ...
	I0510 18:02:53.848771  402035 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0510 18:02:53.880475  402035 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0510 18:02:53.909205  402035 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0510 18:02:54.228229  402035 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0510 18:02:54.462507  402035 docker.go:241] disabling docker service ...
	I0510 18:02:54.462575  402035 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0510 18:02:54.497169  402035 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0510 18:02:54.516357  402035 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0510 18:02:54.753088  402035 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0510 18:02:54.940449  402035 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0510 18:02:54.956825  402035 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0510 18:02:54.980731  402035 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0510 18:02:54.980784  402035 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 18:02:54.993371  402035 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0510 18:02:54.993440  402035 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 18:02:55.006052  402035 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 18:02:55.018197  402035 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 18:02:55.030433  402035 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0510 18:02:55.045006  402035 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 18:02:55.057444  402035 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 18:02:55.071727  402035 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 18:02:55.084200  402035 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0510 18:02:55.096230  402035 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0510 18:02:55.107855  402035 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 18:02:55.290042  402035 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0510 18:04:25.856147  402035 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.566058413s)
	I0510 18:04:25.856185  402035 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0510 18:04:25.856270  402035 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0510 18:04:25.863129  402035 start.go:563] Will wait 60s for crictl version
	I0510 18:04:25.863197  402035 ssh_runner.go:195] Run: which crictl
	I0510 18:04:25.868051  402035 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0510 18:04:25.911506  402035 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0510 18:04:25.911578  402035 ssh_runner.go:195] Run: crio --version
	I0510 18:04:25.945197  402035 ssh_runner.go:195] Run: crio --version
	I0510 18:04:25.980379  402035 out.go:177] * Preparing Kubernetes v1.33.0 on CRI-O 1.29.1 ...
	I0510 18:04:25.982219  402035 main.go:141] libmachine: (functional-581506) Calling .GetIP
	I0510 18:04:25.985326  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:04:25.985730  402035 main.go:141] libmachine: (functional-581506) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:2c:dc", ip: ""} in network mk-functional-581506: {Iface:virbr1 ExpiryTime:2025-05-10 19:00:46 +0000 UTC Type:0 Mac:52:54:00:34:2c:dc Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-581506 Clientid:01:52:54:00:34:2c:dc}
	I0510 18:04:25.985751  402035 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined IP address 192.168.39.52 and MAC address 52:54:00:34:2c:dc in network mk-functional-581506
	I0510 18:04:25.985941  402035 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0510 18:04:25.993435  402035 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0510 18:04:25.995308  402035 kubeadm.go:875] updating cluster {Name:functional-581506 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.33.0 ClusterName:functional-581506 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.52 Port:8441 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountS
tring:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0510 18:04:25.995446  402035 preload.go:131] Checking if preload exists for k8s version v1.33.0 and runtime crio
	I0510 18:04:25.995518  402035 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 18:04:26.045932  402035 crio.go:514] all images are preloaded for cri-o runtime.
	I0510 18:04:26.045946  402035 crio.go:433] Images already preloaded, skipping extraction
	I0510 18:04:26.046014  402035 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 18:04:26.085235  402035 crio.go:514] all images are preloaded for cri-o runtime.
	I0510 18:04:26.085254  402035 cache_images.go:84] Images are preloaded, skipping loading
	I0510 18:04:26.085265  402035 kubeadm.go:926] updating node { 192.168.39.52 8441 v1.33.0 crio true true} ...
	I0510 18:04:26.085431  402035 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.33.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-581506 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.52
	
	[Install]
	 config:
	{KubernetesVersion:v1.33.0 ClusterName:functional-581506 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0510 18:04:26.085506  402035 ssh_runner.go:195] Run: crio config
	I0510 18:04:26.138253  402035 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0510 18:04:26.138281  402035 cni.go:84] Creating CNI manager for ""
	I0510 18:04:26.138297  402035 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0510 18:04:26.138305  402035 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0510 18:04:26.138331  402035 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.52 APIServerPort:8441 KubernetesVersion:v1.33.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-581506 NodeName:functional-581506 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.52"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.52 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts
:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0510 18:04:26.138459  402035 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.52
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-581506"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.52"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.52"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.33.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0510 18:04:26.138527  402035 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.33.0
	I0510 18:04:26.152410  402035 binaries.go:44] Found k8s binaries, skipping transfer
	I0510 18:04:26.152484  402035 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0510 18:04:26.164608  402035 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0510 18:04:26.187091  402035 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0510 18:04:26.208040  402035 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2144 bytes)
	I0510 18:04:26.231151  402035 ssh_runner.go:195] Run: grep 192.168.39.52	control-plane.minikube.internal$ /etc/hosts
	I0510 18:04:26.235726  402035 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 18:04:26.416698  402035 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0510 18:04:26.435417  402035 certs.go:68] Setting up /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506 for IP: 192.168.39.52
	I0510 18:04:26.435435  402035 certs.go:194] generating shared ca certs ...
	I0510 18:04:26.435455  402035 certs.go:226] acquiring lock for ca certs: {Name:mk8db74782205da4ac57ef815dd495cda255251a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 18:04:26.435657  402035 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20720-388787/.minikube/ca.key
	I0510 18:04:26.435715  402035 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.key
	I0510 18:04:26.435724  402035 certs.go:256] generating profile certs ...
	I0510 18:04:26.435807  402035 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/client.key
	I0510 18:04:26.435852  402035 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/apiserver.key.e77f3034
	I0510 18:04:26.435879  402035 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/proxy-client.key
	I0510 18:04:26.435998  402035 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/395980.pem (1338 bytes)
	W0510 18:04:26.436022  402035 certs.go:480] ignoring /home/jenkins/minikube-integration/20720-388787/.minikube/certs/395980_empty.pem, impossibly tiny 0 bytes
	I0510 18:04:26.436028  402035 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem (1679 bytes)
	I0510 18:04:26.436049  402035 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem (1078 bytes)
	I0510 18:04:26.436067  402035 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem (1123 bytes)
	I0510 18:04:26.436088  402035 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem (1675 bytes)
	I0510 18:04:26.436136  402035 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem (1708 bytes)
	I0510 18:04:26.436850  402035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0510 18:04:26.469054  402035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0510 18:04:26.499255  402035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0510 18:04:26.529739  402035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0510 18:04:26.561946  402035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0510 18:04:26.595162  402035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0510 18:04:26.627840  402035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0510 18:04:26.659449  402035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0510 18:04:26.693269  402035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/certs/395980.pem --> /usr/share/ca-certificates/395980.pem (1338 bytes)
	I0510 18:04:26.724816  402035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem --> /usr/share/ca-certificates/3959802.pem (1708 bytes)
	I0510 18:04:26.754834  402035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0510 18:04:26.787011  402035 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0510 18:04:26.809227  402035 ssh_runner.go:195] Run: openssl version
	I0510 18:04:26.817671  402035 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3959802.pem && ln -fs /usr/share/ca-certificates/3959802.pem /etc/ssl/certs/3959802.pem"
	I0510 18:04:26.831583  402035 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3959802.pem
	I0510 18:04:26.837165  402035 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 10 18:00 /usr/share/ca-certificates/3959802.pem
	I0510 18:04:26.837228  402035 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3959802.pem
	I0510 18:04:26.845401  402035 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3959802.pem /etc/ssl/certs/3ec20f2e.0"
	I0510 18:04:26.857985  402035 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0510 18:04:26.871727  402035 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0510 18:04:26.877551  402035 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 10 17:52 /usr/share/ca-certificates/minikubeCA.pem
	I0510 18:04:26.877655  402035 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0510 18:04:26.885597  402035 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0510 18:04:26.897966  402035 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/395980.pem && ln -fs /usr/share/ca-certificates/395980.pem /etc/ssl/certs/395980.pem"
	I0510 18:04:26.911449  402035 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/395980.pem
	I0510 18:04:26.917136  402035 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 10 18:00 /usr/share/ca-certificates/395980.pem
	I0510 18:04:26.917209  402035 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/395980.pem
	I0510 18:04:26.924808  402035 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/395980.pem /etc/ssl/certs/51391683.0"
	I0510 18:04:26.957285  402035 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0510 18:04:26.969150  402035 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0510 18:04:26.987736  402035 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0510 18:04:27.006182  402035 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0510 18:04:27.022469  402035 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0510 18:04:27.031936  402035 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0510 18:04:27.044701  402035 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0510 18:04:27.061968  402035 kubeadm.go:392] StartCluster: {Name:functional-581506 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33
.0 ClusterName:functional-581506 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.52 Port:8441 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountStri
ng:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 18:04:27.062052  402035 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0510 18:04:27.062122  402035 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0510 18:04:27.165135  402035 cri.go:89] found id: "67bef24b725ebf7a2b7f343d7456516d6b5de38118f9cf48e7d70d9146ce2087"
	I0510 18:04:27.165148  402035 cri.go:89] found id: "2b4eccacbeea6a58cc9c575f2c2bf5f8297029f9c9d2a9264bcf3e69644b4c28"
	I0510 18:04:27.165151  402035 cri.go:89] found id: "9ddf6914642a098d580c48db641460c4197df74a06bf7008e362f610f185934d"
	I0510 18:04:27.165153  402035 cri.go:89] found id: "08d812eb972925640e90642a5458269dea94298436a73e78a578d0bfe369daaf"
	I0510 18:04:27.165155  402035 cri.go:89] found id: "74fd0b7de642965eb7e03cf324017cb2195034685758e46efbd5e6997aba9ae5"
	I0510 18:04:27.165157  402035 cri.go:89] found id: "5879bea6c3a25517766471c3eec758ce0c6d853db7055e1f3505263a674ed969"
	I0510 18:04:27.165158  402035 cri.go:89] found id: "bc42d63e6220a437de1d056d765ed97df2e6978798401b10283f61c7b1bc895b"
	I0510 18:04:27.165160  402035 cri.go:89] found id: ""
	I0510 18:04:27.165206  402035 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-581506 -n functional-581506
helpers_test.go:261: (dbg) Run:  kubectl --context functional-581506 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-node-connect-58f9cf68d8-prxzn hello-node-fcfd88b6f-gmdwq mysql-58ccfd96bb-2jm87 sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-581506 describe pod hello-node-connect-58f9cf68d8-prxzn hello-node-fcfd88b6f-gmdwq mysql-58ccfd96bb-2jm87 sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-581506 describe pod hello-node-connect-58f9cf68d8-prxzn hello-node-fcfd88b6f-gmdwq mysql-58ccfd96bb-2jm87 sp-pod:

                                                
                                                
-- stdout --
	Name:             hello-node-connect-58f9cf68d8-prxzn
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=hello-node-connect
	                  pod-template-hash=58f9cf68d8
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-node-connect-58f9cf68d8
	Containers:
	  echoserver:
	    Image:        registry.k8s.io/echoserver:1.8
	    Port:         <none>
	    Host Port:    <none>
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vjht5 (ro)
	Volumes:
	  kube-api-access-vjht5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             hello-node-fcfd88b6f-gmdwq
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=hello-node
	                  pod-template-hash=fcfd88b6f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-node-fcfd88b6f
	Containers:
	  echoserver:
	    Image:        registry.k8s.io/echoserver:1.8
	    Port:         <none>
	    Host Port:    <none>
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-56j2g (ro)
	Volumes:
	  kube-api-access-56j2g:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             mysql-58ccfd96bb-2jm87
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=mysql
	                  pod-template-hash=58ccfd96bb
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/mysql-58ccfd96bb
	Containers:
	  mysql:
	    Image:      docker.io/mysql:5.7
	    Port:       3306/TCP
	    Host Port:  0/TCP
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-v72cr (ro)
	Volumes:
	  kube-api-access-v72cr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  myfrontend:
	    Image:        docker.io/nginx
	    Port:         <none>
	    Host Port:    <none>
	    Environment:  <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6q8c7 (ro)
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-6q8c7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (187.94s)
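For context, the claim-and-pod pairing this test exercises looks roughly like the sketch below, reconstructed from the sp-pod description above; the access mode and requested storage size are assumptions and may differ from the actual testdata manifests.

# Hypothetical reconstruction of the PVC/pod used by TestFunctional/parallel/PersistentVolumeClaim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce            # assumption: a single-node RWO claim
  resources:
    requests:
      storage: 500Mi           # assumption: any small size served by the default StorageClass
---
apiVersion: v1
kind: Pod
metadata:
  name: sp-pod
  labels:
    test: storage-provisioner
spec:
  containers:
    - name: myfrontend
      image: docker.io/nginx
      volumeMounts:
        - mountPath: /tmp/mount
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim

Because sp-pod still reports Node: <none> with no events, the pod was never scheduled before the test timed out.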

                                                
                                    
TestFunctional/parallel/MySQL (603.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1810: (dbg) Run:  kubectl --context functional-581506 replace --force -f testdata/mysql.yaml
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-2jm87" [1ee3d4bc-f507-476e-98f7-9578ceaa4ca3] Pending
helpers_test.go:329: TestFunctional/parallel/MySQL: WARNING: pod list for "default" "app=mysql" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test.go:1816: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1816: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-581506 -n functional-581506
functional_test.go:1816: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2025-05-10 18:18:48.776480454 +0000 UTC m=+1603.104093101
functional_test.go:1816: (dbg) Run:  kubectl --context functional-581506 describe po mysql-58ccfd96bb-2jm87 -n default
functional_test.go:1816: (dbg) kubectl --context functional-581506 describe po mysql-58ccfd96bb-2jm87 -n default:
Name:             mysql-58ccfd96bb-2jm87
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           app=mysql
                  pod-template-hash=58ccfd96bb
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    ReplicaSet/mysql-58ccfd96bb
Containers:
  mysql:
    Image:      docker.io/mysql:5.7
    Port:       3306/TCP
    Host Port:  0/TCP
    Limits:
      cpu:     700m
      memory:  700Mi
    Requests:
      cpu:     600m
      memory:  512Mi
    Environment:
      MYSQL_ROOT_PASSWORD:  password
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-v72cr (ro)
Volumes:
  kube-api-access-v72cr:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>
functional_test.go:1816: (dbg) Run:  kubectl --context functional-581506 logs mysql-58ccfd96bb-2jm87 -n default
functional_test.go:1816: (dbg) kubectl --context functional-581506 logs mysql-58ccfd96bb-2jm87 -n default:
functional_test.go:1818: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
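The workload the test waits on corresponds to the pod description above; a minimal sketch of what testdata/mysql.yaml likely defines is shown below. Names, image, port, env, and resource values are taken from the describe output; everything else, such as the replica count, is an assumption.

# Hypothetical reconstruction of the Deployment behind mysql-58ccfd96bb-2jm87
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1                      # assumption
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: docker.io/mysql:5.7
          ports:
            - containerPort: 3306
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: password
          resources:
            requests:
              cpu: 600m
              memory: 512Mi
            limits:
              cpu: 700m
              memory: 700Mi

As with sp-pod, the pod shows Node: <none> and no events, so it was never scheduled within the 10m0s window.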
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-581506 -n functional-581506
helpers_test.go:244: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-581506 logs -n 25: (1.580987763s)
helpers_test.go:252: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	|-----------|------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                  Args                                  |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| addons    | functional-581506 addons list                                          | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:08 UTC | 10 May 25 18:08 UTC |
	|           | -o json                                                                |                   |         |         |                     |                     |
	| mount     | -p functional-581506                                                   | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:11 UTC |                     |
	|           | /tmp/TestFunctionalparallelMountCmdany-port178078978/001:/mount-9p     |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                 |                   |         |         |                     |                     |
	| ssh       | functional-581506 ssh findmnt                                          | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:11 UTC |                     |
	|           | -T /mount-9p | grep 9p                                                 |                   |         |         |                     |                     |
	| ssh       | functional-581506 ssh findmnt                                          | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:12 UTC | 10 May 25 18:12 UTC |
	|           | -T /mount-9p | grep 9p                                                 |                   |         |         |                     |                     |
	| ssh       | functional-581506 ssh -- ls                                            | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:12 UTC | 10 May 25 18:12 UTC |
	|           | -la /mount-9p                                                          |                   |         |         |                     |                     |
	| ssh       | functional-581506 ssh cat                                              | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:12 UTC | 10 May 25 18:12 UTC |
	|           | /mount-9p/test-1746900719767655459                                     |                   |         |         |                     |                     |
	| ssh       | functional-581506 ssh mount |                                          | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:16 UTC |                     |
	|           | grep 9p; ls -la /mount-9p; cat                                         |                   |         |         |                     |                     |
	|           | /mount-9p/pod-dates                                                    |                   |         |         |                     |                     |
	| ssh       | functional-581506 ssh sudo                                             | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:16 UTC | 10 May 25 18:16 UTC |
	|           | umount -f /mount-9p                                                    |                   |         |         |                     |                     |
	| mount     | -p functional-581506                                                   | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:16 UTC |                     |
	|           | /tmp/TestFunctionalparallelMountCmdspecific-port44352173/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                    |                   |         |         |                     |                     |
	| ssh       | functional-581506 ssh findmnt                                          | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:16 UTC |                     |
	|           | -T /mount-9p | grep 9p                                                 |                   |         |         |                     |                     |
	| ssh       | functional-581506 ssh findmnt                                          | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:16 UTC | 10 May 25 18:16 UTC |
	|           | -T /mount-9p | grep 9p                                                 |                   |         |         |                     |                     |
	| ssh       | functional-581506 ssh -- ls                                            | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:16 UTC | 10 May 25 18:16 UTC |
	|           | -la /mount-9p                                                          |                   |         |         |                     |                     |
	| ssh       | functional-581506 ssh sudo                                             | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:16 UTC |                     |
	|           | umount -f /mount-9p                                                    |                   |         |         |                     |                     |
	| mount     | -p functional-581506                                                   | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:16 UTC |                     |
	|           | /tmp/TestFunctionalparallelMountCmdVerifyCleanup3783465736/001:/mount1 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                 |                   |         |         |                     |                     |
	| ssh       | functional-581506 ssh findmnt                                          | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:16 UTC |                     |
	|           | -T /mount1                                                             |                   |         |         |                     |                     |
	| mount     | -p functional-581506                                                   | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:16 UTC |                     |
	|           | /tmp/TestFunctionalparallelMountCmdVerifyCleanup3783465736/001:/mount3 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                 |                   |         |         |                     |                     |
	| mount     | -p functional-581506                                                   | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:16 UTC |                     |
	|           | /tmp/TestFunctionalparallelMountCmdVerifyCleanup3783465736/001:/mount2 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                 |                   |         |         |                     |                     |
	| ssh       | functional-581506 ssh findmnt                                          | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:16 UTC | 10 May 25 18:16 UTC |
	|           | -T /mount1                                                             |                   |         |         |                     |                     |
	| ssh       | functional-581506 ssh findmnt                                          | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:16 UTC | 10 May 25 18:16 UTC |
	|           | -T /mount2                                                             |                   |         |         |                     |                     |
	| ssh       | functional-581506 ssh findmnt                                          | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:16 UTC | 10 May 25 18:16 UTC |
	|           | -T /mount3                                                             |                   |         |         |                     |                     |
	| mount     | -p functional-581506                                                   | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:16 UTC |                     |
	|           | --kill=true                                                            |                   |         |         |                     |                     |
	| start     | -p functional-581506                                                   | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:16 UTC |                     |
	|           | --dry-run --memory                                                     |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                |                   |         |         |                     |                     |
	|           | --driver=kvm2                                                          |                   |         |         |                     |                     |
	|           | --container-runtime=crio                                               |                   |         |         |                     |                     |
	| start     | -p functional-581506                                                   | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:16 UTC |                     |
	|           | --dry-run --alsologtostderr                                            |                   |         |         |                     |                     |
	|           | -v=1 --driver=kvm2                                                     |                   |         |         |                     |                     |
	|           | --container-runtime=crio                                               |                   |         |         |                     |                     |
	| start     | -p functional-581506                                                   | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:16 UTC |                     |
	|           | --dry-run --memory                                                     |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                |                   |         |         |                     |                     |
	|           | --driver=kvm2                                                          |                   |         |         |                     |                     |
	|           | --container-runtime=crio                                               |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                     | functional-581506 | jenkins | v1.35.0 | 10 May 25 18:16 UTC |                     |
	|           | -p functional-581506                                                   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                 |                   |         |         |                     |                     |
	|-----------|------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/05/10 18:16:06
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0510 18:16:06.249127  407218 out.go:345] Setting OutFile to fd 1 ...
	I0510 18:16:06.249230  407218 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 18:16:06.249245  407218 out.go:358] Setting ErrFile to fd 2...
	I0510 18:16:06.249249  407218 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 18:16:06.249538  407218 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-388787/.minikube/bin
	I0510 18:16:06.250056  407218 out.go:352] Setting JSON to false
	I0510 18:16:06.250986  407218 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":28714,"bootTime":1746872252,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0510 18:16:06.251048  407218 start.go:140] virtualization: kvm guest
	I0510 18:16:06.252905  407218 out.go:177] * [functional-581506] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0510 18:16:06.254379  407218 out.go:177]   - MINIKUBE_LOCATION=20720
	I0510 18:16:06.254378  407218 notify.go:220] Checking for updates...
	I0510 18:16:06.255877  407218 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0510 18:16:06.257250  407218 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20720-388787/kubeconfig
	I0510 18:16:06.258440  407218 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-388787/.minikube
	I0510 18:16:06.259843  407218 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0510 18:16:06.261024  407218 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0510 18:16:06.262455  407218 config.go:182] Loaded profile config "functional-581506": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 18:16:06.262923  407218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 18:16:06.263004  407218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:16:06.279063  407218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38601
	I0510 18:16:06.279680  407218 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:16:06.280465  407218 main.go:141] libmachine: Using API Version  1
	I0510 18:16:06.280504  407218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:16:06.280895  407218 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:16:06.281110  407218 main.go:141] libmachine: (functional-581506) Calling .DriverName
	I0510 18:16:06.281407  407218 driver.go:404] Setting default libvirt URI to qemu:///system
	I0510 18:16:06.281717  407218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 18:16:06.281756  407218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:16:06.297201  407218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44999
	I0510 18:16:06.297734  407218 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:16:06.298367  407218 main.go:141] libmachine: Using API Version  1
	I0510 18:16:06.298396  407218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:16:06.298758  407218 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:16:06.298967  407218 main.go:141] libmachine: (functional-581506) Calling .DriverName
	I0510 18:16:06.333465  407218 out.go:177] * Using the kvm2 driver based on existing profile
	I0510 18:16:06.334614  407218 start.go:304] selected driver: kvm2
	I0510 18:16:06.334628  407218 start.go:908] validating driver "kvm2" against &{Name:functional-581506 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.33.0 ClusterName:functional-581506 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.52 Port:8441 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 18:16:06.334724  407218 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0510 18:16:06.336620  407218 out.go:201] 
	W0510 18:16:06.337727  407218 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0510 18:16:06.338871  407218 out.go:201] 
	
	
	==> CRI-O <==
	May 10 18:18:49 functional-581506 crio[5891]: time="2025-05-10 18:18:49.772574073Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bc59d73bfa2bf2b1a39e797a7d2b573e644354a5079881f3dd26cec1c252aba,PodSandboxId:ac1fe88b05f85fd3070ec6db14c318d8b19cd922062770aa6fc6b88cf2bc0f14,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1746900274095849002,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-t4rcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1c5c10-5db3-43e0-935a-0549799273f3,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca40d958630336ad5282e3e644a344eb6222b09f601d44b816dbc17429e58924,PodSandboxId:0c91495cd04f27933de8b107c48b7ad6314a49c58ac7a22c6acb1832e85de258,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,State:CONTAINER_RUNNING,CreatedAt:1746900270663661287,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-581506,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 65ecf0b12922dcc8259f7d51baab7e18,},Annotations:map[string]string{io.kubernetes.container.hash: 2e2dc675,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:206d421221f482411c4e5a5ef3f7102eccd8b38f07c242446855962f9958f985,PodSandboxId:a71305dd0a11cb4fec07b8ecece405b394e529aed22658f28110cc632eb39534,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1746900267571783633,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: ea7d9372-7c9e-444b-a628-0dfc4003f07d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bad506e8de60f9ec83d122523ca19234a72175234ffd3433d02684eb651ce9d,PodSandboxId:ad6bf190d55676203ab65df23981cd676ca08ed2bc2eef1dd05517d694c7e66e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,State:CONTAINER_RUNNING,CreatedAt:1746900267584333315,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-581506,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: b2dc81ade1bbda73868f61223889f8f4,},Annotations:map[string]string{io.kubernetes.container.hash: 20846f37,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1002d7979feaa7a0860a8934e8992ae4fdc369b64f2a34d3a93bf01f4e8015e3,PodSandboxId:1713a07d44b66f7d807e2bd691e25e7ecdd6e7c5d84c1261729e464047a1a031,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1746900267652578505,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-581506,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe62e874
0903c7a0badf385e7524512e,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c908b9ef4e2dd4afa4d8c8077af1366569126a578de927d95d14f07813040bab,PodSandboxId:da31c3a5af7bf008afa7c113669c143c8daf56d21cd077d4cf6dc85664b412de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,State:CONTAINER_RUNNING,CreatedAt:1746900267384004977,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sxk9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f4ab1a-93f7-4c1e-bcbe-5f9c9daaae46,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 2406bd3f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b4eccacbeea6a58cc9c575f2c2bf5f8297029f9c9d2a9264bcf3e69644b4c28,PodSandboxId:e49ea2b58308c6c0b9b2908ae1ab6a5818f361d3a75849eac0ab8eb63fab41ca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,State:CONTAINER_EXITED,CreatedAt:1746900143518984605,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sxk9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f4ab1a-93f7-4c1e-bcbe-5f9c9daaae46,},Annotations:map[string]string{io.kubernet
es.container.hash: 2406bd3f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ddf6914642a098d580c48db641460c4197df74a06bf7008e362f610f185934d,PodSandboxId:00c4138d2ab0d3a6880991ae6ca2f7c7e3c2de33b60a469043a91f7f8adef12d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1746900143498396555,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea7d9372-7c9e-444b-a628-0dfc4003f07d,},Annotations:map[string]string{io.kubernetes.container.
hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67bef24b725ebf7a2b7f343d7456516d6b5de38118f9cf48e7d70d9146ce2087,PodSandboxId:e30af250008246b61b90a3718d1c328f2984559c29b8526e0386129454a98b4a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_EXITED,CreatedAt:1746900143526731809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-t4rcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1c5c10-5db3-43e0-935a-0549799273f3,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.con
tainer.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5879bea6c3a25517766471c3eec758ce0c6d853db7055e1f3505263a674ed969,PodSandboxId:2cc3ee9d3458fbdf619a3c176b445eff63eefe6d42ab071484b6ca448013de07,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_EXITED,CreatedAt:1746900136904565424,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-581506,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe62e8740903c7a0badf385e7524512e,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74fd0b7de642965eb7e03cf324017cb2195034685758e46efbd5e6997aba9ae5,PodSandboxId:45aa7f96fbe49dd74e9cdfcc97884ce5caba88b39b6e9b00f2357661ecbba1a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,State:CONTAINER_EXITED,CreatedAt:1746900136908093042,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functiona
l-581506,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2dc81ade1bbda73868f61223889f8f4,},Annotations:map[string]string{io.kubernetes.container.hash: 20846f37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc42d63e6220a437de1d056d765ed97df2e6978798401b10283f61c7b1bc895b,PodSandboxId:6ed00def2c968d5a51634c7dafc6e6cc749b20e361a2365659842d41ca79ff9c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,State:CONTAINER_EXITED,CreatedAt:1746900136856417357,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-581506,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a886f34999ac0d6b56a638cab77f640,},Annotations:map[string]string{io.kubernetes.container.hash: fd54b99d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f41feb9d-1755-4a0e-a691-7d1afc2bf5ad name=/runtime.v1.RuntimeService/ListContainers
	May 10 18:18:49 functional-581506 crio[5891]: time="2025-05-10 18:18:49.825177037Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5650e323-457c-4db5-9227-5332db8273d3 name=/runtime.v1.RuntimeService/ListPodSandbox
	May 10 18:18:49 functional-581506 crio[5891]: time="2025-05-10 18:18:49.825370497Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:0c91495cd04f27933de8b107c48b7ad6314a49c58ac7a22c6acb1832e85de258,Metadata:&PodSandboxMetadata{Name:kube-apiserver-functional-581506,Uid:65ecf0b12922dcc8259f7d51baab7e18,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1746900270473149926,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-functional-581506,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65ecf0b12922dcc8259f7d51baab7e18,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.52:8441,kubernetes.io/config.hash: 65ecf0b12922dcc8259f7d51baab7e18,kubernetes.io/config.seen: 2025-05-10T18:04:29.772788219Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ac1fe88b05f85fd3070ec6db14c318
d8b19cd922062770aa6fc6b88cf2bc0f14,Metadata:&PodSandboxMetadata{Name:coredns-674b8bbfcf-t4rcv,Uid:0b1c5c10-5db3-43e0-935a-0549799273f3,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1746900267590088854,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-674b8bbfcf-t4rcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1c5c10-5db3-43e0-935a-0549799273f3,k8s-app: kube-dns,pod-template-hash: 674b8bbfcf,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-05-10T18:02:23.180950648Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a71305dd0a11cb4fec07b8ecece405b394e529aed22658f28110cc632eb39534,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:ea7d9372-7c9e-444b-a628-0dfc4003f07d,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1746900267091837850,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kub
ernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea7d9372-7c9e-444b-a628-0dfc4003f07d,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-05-10T18:02:23.180949404Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1713a07d44b66f7
d807e2bd691e25e7ecdd6e7c5d84c1261729e464047a1a031,Metadata:&PodSandboxMetadata{Name:etcd-functional-581506,Uid:fe62e8740903c7a0badf385e7524512e,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1746900267085263377,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-581506,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe62e8740903c7a0badf385e7524512e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.52:2379,kubernetes.io/config.hash: fe62e8740903c7a0badf385e7524512e,kubernetes.io/config.seen: 2025-05-10T18:02:19.175016134Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ad6bf190d55676203ab65df23981cd676ca08ed2bc2eef1dd05517d694c7e66e,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-581506,Uid:b2dc81ade1bbda73868f61223889f8f4,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1746900267016647905
,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-581506,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2dc81ade1bbda73868f61223889f8f4,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b2dc81ade1bbda73868f61223889f8f4,kubernetes.io/config.seen: 2025-05-10T18:02:19.175011542Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:da31c3a5af7bf008afa7c113669c143c8daf56d21cd077d4cf6dc85664b412de,Metadata:&PodSandboxMetadata{Name:kube-proxy-sxk9c,Uid:c3f4ab1a-93f7-4c1e-bcbe-5f9c9daaae46,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1746900266988634181,Labels:map[string]string{controller-revision-hash: 7b75d89869,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-sxk9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f4ab1a-93f7-4c1e-bcbe-5f9c9daaae46,k8s-app: kube-proxy,pod-template-generation: 1,}
,Annotations:map[string]string{kubernetes.io/config.seen: 2025-05-10T18:02:23.180940195Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=5650e323-457c-4db5-9227-5332db8273d3 name=/runtime.v1.RuntimeService/ListPodSandbox
	May 10 18:18:49 functional-581506 crio[5891]: time="2025-05-10 18:18:49.826208711Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=084fd886-91e5-407f-8500-39c56a00fdfa name=/runtime.v1.RuntimeService/ListContainers
	May 10 18:18:49 functional-581506 crio[5891]: time="2025-05-10 18:18:49.826327318Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=084fd886-91e5-407f-8500-39c56a00fdfa name=/runtime.v1.RuntimeService/ListContainers
	May 10 18:18:49 functional-581506 crio[5891]: time="2025-05-10 18:18:49.826465041Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bc59d73bfa2bf2b1a39e797a7d2b573e644354a5079881f3dd26cec1c252aba,PodSandboxId:ac1fe88b05f85fd3070ec6db14c318d8b19cd922062770aa6fc6b88cf2bc0f14,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1746900274095849002,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-t4rcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1c5c10-5db3-43e0-935a-0549799273f3,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca40d958630336ad5282e3e644a344eb6222b09f601d44b816dbc17429e58924,PodSandboxId:0c91495cd04f27933de8b107c48b7ad6314a49c58ac7a22c6acb1832e85de258,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,State:CONTAINER_RUNNING,CreatedAt:1746900270663661287,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-581506,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 65ecf0b12922dcc8259f7d51baab7e18,},Annotations:map[string]string{io.kubernetes.container.hash: 2e2dc675,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:206d421221f482411c4e5a5ef3f7102eccd8b38f07c242446855962f9958f985,PodSandboxId:a71305dd0a11cb4fec07b8ecece405b394e529aed22658f28110cc632eb39534,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1746900267571783633,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: ea7d9372-7c9e-444b-a628-0dfc4003f07d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bad506e8de60f9ec83d122523ca19234a72175234ffd3433d02684eb651ce9d,PodSandboxId:ad6bf190d55676203ab65df23981cd676ca08ed2bc2eef1dd05517d694c7e66e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,State:CONTAINER_RUNNING,CreatedAt:1746900267584333315,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-581506,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: b2dc81ade1bbda73868f61223889f8f4,},Annotations:map[string]string{io.kubernetes.container.hash: 20846f37,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1002d7979feaa7a0860a8934e8992ae4fdc369b64f2a34d3a93bf01f4e8015e3,PodSandboxId:1713a07d44b66f7d807e2bd691e25e7ecdd6e7c5d84c1261729e464047a1a031,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1746900267652578505,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-581506,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe62e874
0903c7a0badf385e7524512e,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c908b9ef4e2dd4afa4d8c8077af1366569126a578de927d95d14f07813040bab,PodSandboxId:da31c3a5af7bf008afa7c113669c143c8daf56d21cd077d4cf6dc85664b412de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,State:CONTAINER_RUNNING,CreatedAt:1746900267384004977,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sxk9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f4ab1a-93f7-4c1e-bcbe-5f9c9daaae46,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 2406bd3f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=084fd886-91e5-407f-8500-39c56a00fdfa name=/runtime.v1.RuntimeService/ListContainers
	May 10 18:18:49 functional-581506 crio[5891]: time="2025-05-10 18:18:49.831056675Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bc17f523-749d-4829-a952-04f23562ea0c name=/runtime.v1.RuntimeService/Version
	May 10 18:18:49 functional-581506 crio[5891]: time="2025-05-10 18:18:49.831140850Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bc17f523-749d-4829-a952-04f23562ea0c name=/runtime.v1.RuntimeService/Version
	May 10 18:18:49 functional-581506 crio[5891]: time="2025-05-10 18:18:49.833270012Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=51c106de-89c7-42c3-b9dd-9faeae0b033b name=/runtime.v1.ImageService/ImageFsInfo
	May 10 18:18:49 functional-581506 crio[5891]: time="2025-05-10 18:18:49.833795930Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746901129833773691,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:165066,},InodesUsed:&UInt64Value{Value:82,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=51c106de-89c7-42c3-b9dd-9faeae0b033b name=/runtime.v1.ImageService/ImageFsInfo
	May 10 18:18:49 functional-581506 crio[5891]: time="2025-05-10 18:18:49.834545894Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2747564d-fa8d-4122-92a4-82946311a0e9 name=/runtime.v1.RuntimeService/ListContainers
	May 10 18:18:49 functional-581506 crio[5891]: time="2025-05-10 18:18:49.834622831Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2747564d-fa8d-4122-92a4-82946311a0e9 name=/runtime.v1.RuntimeService/ListContainers
	May 10 18:18:49 functional-581506 crio[5891]: time="2025-05-10 18:18:49.835537038Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bc59d73bfa2bf2b1a39e797a7d2b573e644354a5079881f3dd26cec1c252aba,PodSandboxId:ac1fe88b05f85fd3070ec6db14c318d8b19cd922062770aa6fc6b88cf2bc0f14,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1746900274095849002,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-t4rcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1c5c10-5db3-43e0-935a-0549799273f3,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca40d958630336ad5282e3e644a344eb6222b09f601d44b816dbc17429e58924,PodSandboxId:0c91495cd04f27933de8b107c48b7ad6314a49c58ac7a22c6acb1832e85de258,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,State:CONTAINER_RUNNING,CreatedAt:1746900270663661287,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-581506,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 65ecf0b12922dcc8259f7d51baab7e18,},Annotations:map[string]string{io.kubernetes.container.hash: 2e2dc675,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:206d421221f482411c4e5a5ef3f7102eccd8b38f07c242446855962f9958f985,PodSandboxId:a71305dd0a11cb4fec07b8ecece405b394e529aed22658f28110cc632eb39534,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1746900267571783633,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: ea7d9372-7c9e-444b-a628-0dfc4003f07d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bad506e8de60f9ec83d122523ca19234a72175234ffd3433d02684eb651ce9d,PodSandboxId:ad6bf190d55676203ab65df23981cd676ca08ed2bc2eef1dd05517d694c7e66e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,State:CONTAINER_RUNNING,CreatedAt:1746900267584333315,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-581506,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: b2dc81ade1bbda73868f61223889f8f4,},Annotations:map[string]string{io.kubernetes.container.hash: 20846f37,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1002d7979feaa7a0860a8934e8992ae4fdc369b64f2a34d3a93bf01f4e8015e3,PodSandboxId:1713a07d44b66f7d807e2bd691e25e7ecdd6e7c5d84c1261729e464047a1a031,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1746900267652578505,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-581506,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe62e874
0903c7a0badf385e7524512e,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c908b9ef4e2dd4afa4d8c8077af1366569126a578de927d95d14f07813040bab,PodSandboxId:da31c3a5af7bf008afa7c113669c143c8daf56d21cd077d4cf6dc85664b412de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,State:CONTAINER_RUNNING,CreatedAt:1746900267384004977,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sxk9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f4ab1a-93f7-4c1e-bcbe-5f9c9daaae46,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 2406bd3f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b4eccacbeea6a58cc9c575f2c2bf5f8297029f9c9d2a9264bcf3e69644b4c28,PodSandboxId:e49ea2b58308c6c0b9b2908ae1ab6a5818f361d3a75849eac0ab8eb63fab41ca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,State:CONTAINER_EXITED,CreatedAt:1746900143518984605,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sxk9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f4ab1a-93f7-4c1e-bcbe-5f9c9daaae46,},Annotations:map[string]string{io.kubernet
es.container.hash: 2406bd3f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ddf6914642a098d580c48db641460c4197df74a06bf7008e362f610f185934d,PodSandboxId:00c4138d2ab0d3a6880991ae6ca2f7c7e3c2de33b60a469043a91f7f8adef12d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1746900143498396555,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea7d9372-7c9e-444b-a628-0dfc4003f07d,},Annotations:map[string]string{io.kubernetes.container.
hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67bef24b725ebf7a2b7f343d7456516d6b5de38118f9cf48e7d70d9146ce2087,PodSandboxId:e30af250008246b61b90a3718d1c328f2984559c29b8526e0386129454a98b4a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_EXITED,CreatedAt:1746900143526731809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-t4rcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1c5c10-5db3-43e0-935a-0549799273f3,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.con
tainer.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5879bea6c3a25517766471c3eec758ce0c6d853db7055e1f3505263a674ed969,PodSandboxId:2cc3ee9d3458fbdf619a3c176b445eff63eefe6d42ab071484b6ca448013de07,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_EXITED,CreatedAt:1746900136904565424,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-581506,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe62e8740903c7a0badf385e7524512e,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74fd0b7de642965eb7e03cf324017cb2195034685758e46efbd5e6997aba9ae5,PodSandboxId:45aa7f96fbe49dd74e9cdfcc97884ce5caba88b39b6e9b00f2357661ecbba1a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,State:CONTAINER_EXITED,CreatedAt:1746900136908093042,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functiona
l-581506,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2dc81ade1bbda73868f61223889f8f4,},Annotations:map[string]string{io.kubernetes.container.hash: 20846f37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc42d63e6220a437de1d056d765ed97df2e6978798401b10283f61c7b1bc895b,PodSandboxId:6ed00def2c968d5a51634c7dafc6e6cc749b20e361a2365659842d41ca79ff9c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,State:CONTAINER_EXITED,CreatedAt:1746900136856417357,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-581506,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a886f34999ac0d6b56a638cab77f640,},Annotations:map[string]string{io.kubernetes.container.hash: fd54b99d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2747564d-fa8d-4122-92a4-82946311a0e9 name=/runtime.v1.RuntimeService/ListContainers
	May 10 18:18:49 functional-581506 crio[5891]: time="2025-05-10 18:18:49.842788959Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=3095d204-0b7f-4300-a8b8-7c73a05a262f name=/runtime.v1.RuntimeService/ListPodSandbox
	May 10 18:18:49 functional-581506 crio[5891]: time="2025-05-10 18:18:49.843274870Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:0c91495cd04f27933de8b107c48b7ad6314a49c58ac7a22c6acb1832e85de258,Metadata:&PodSandboxMetadata{Name:kube-apiserver-functional-581506,Uid:65ecf0b12922dcc8259f7d51baab7e18,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1746900270473149926,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-functional-581506,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65ecf0b12922dcc8259f7d51baab7e18,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.52:8441,kubernetes.io/config.hash: 65ecf0b12922dcc8259f7d51baab7e18,kubernetes.io/config.seen: 2025-05-10T18:04:29.772788219Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ac1fe88b05f85fd3070ec6db14c318
d8b19cd922062770aa6fc6b88cf2bc0f14,Metadata:&PodSandboxMetadata{Name:coredns-674b8bbfcf-t4rcv,Uid:0b1c5c10-5db3-43e0-935a-0549799273f3,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1746900267590088854,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-674b8bbfcf-t4rcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1c5c10-5db3-43e0-935a-0549799273f3,k8s-app: kube-dns,pod-template-hash: 674b8bbfcf,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-05-10T18:02:23.180950648Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a71305dd0a11cb4fec07b8ecece405b394e529aed22658f28110cc632eb39534,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:ea7d9372-7c9e-444b-a628-0dfc4003f07d,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1746900267091837850,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kub
ernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea7d9372-7c9e-444b-a628-0dfc4003f07d,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-05-10T18:02:23.180949404Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1713a07d44b66f7
d807e2bd691e25e7ecdd6e7c5d84c1261729e464047a1a031,Metadata:&PodSandboxMetadata{Name:etcd-functional-581506,Uid:fe62e8740903c7a0badf385e7524512e,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1746900267085263377,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-581506,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe62e8740903c7a0badf385e7524512e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.52:2379,kubernetes.io/config.hash: fe62e8740903c7a0badf385e7524512e,kubernetes.io/config.seen: 2025-05-10T18:02:19.175016134Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ad6bf190d55676203ab65df23981cd676ca08ed2bc2eef1dd05517d694c7e66e,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-581506,Uid:b2dc81ade1bbda73868f61223889f8f4,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1746900267016647905
,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-581506,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2dc81ade1bbda73868f61223889f8f4,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b2dc81ade1bbda73868f61223889f8f4,kubernetes.io/config.seen: 2025-05-10T18:02:19.175011542Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:da31c3a5af7bf008afa7c113669c143c8daf56d21cd077d4cf6dc85664b412de,Metadata:&PodSandboxMetadata{Name:kube-proxy-sxk9c,Uid:c3f4ab1a-93f7-4c1e-bcbe-5f9c9daaae46,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1746900266988634181,Labels:map[string]string{controller-revision-hash: 7b75d89869,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-sxk9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f4ab1a-93f7-4c1e-bcbe-5f9c9daaae46,k8s-app: kube-proxy,pod-template-generation: 1,}
,Annotations:map[string]string{kubernetes.io/config.seen: 2025-05-10T18:02:23.180940195Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e30af250008246b61b90a3718d1c328f2984559c29b8526e0386129454a98b4a,Metadata:&PodSandboxMetadata{Name:coredns-674b8bbfcf-t4rcv,Uid:0b1c5c10-5db3-43e0-935a-0549799273f3,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1746900136803598440,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-674b8bbfcf-t4rcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1c5c10-5db3-43e0-935a-0549799273f3,k8s-app: kube-dns,pod-template-hash: 674b8bbfcf,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-05-10T18:01:19.899298036Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2cc3ee9d3458fbdf619a3c176b445eff63eefe6d42ab071484b6ca448013de07,Metadata:&PodSandboxMetadata{Name:etcd-functional-581506,Uid:fe62e8740903c7a0badf385e7524512e,Namespace:kube-system,Attempt:1,},State:S
ANDBOX_NOTREADY,CreatedAt:1746900136435470891,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-581506,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe62e8740903c7a0badf385e7524512e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.52:2379,kubernetes.io/config.hash: fe62e8740903c7a0badf385e7524512e,kubernetes.io/config.seen: 2025-05-10T18:01:14.852549120Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6ed00def2c968d5a51634c7dafc6e6cc749b20e361a2365659842d41ca79ff9c,Metadata:&PodSandboxMetadata{Name:kube-scheduler-functional-581506,Uid:0a886f34999ac0d6b56a638cab77f640,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1746900136430362120,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-functional-581506,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 0a886f34999ac0d6b56a638cab77f640,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0a886f34999ac0d6b56a638cab77f640,kubernetes.io/config.seen: 2025-05-10T18:01:14.852555099Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e49ea2b58308c6c0b9b2908ae1ab6a5818f361d3a75849eac0ab8eb63fab41ca,Metadata:&PodSandboxMetadata{Name:kube-proxy-sxk9c,Uid:c3f4ab1a-93f7-4c1e-bcbe-5f9c9daaae46,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1746900136424737097,Labels:map[string]string{controller-revision-hash: 7b75d89869,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-sxk9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f4ab1a-93f7-4c1e-bcbe-5f9c9daaae46,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-05-10T18:01:19.752209915Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:45aa7f96fbe49dd74e9cdfcc97884ce5caba88b39b6e9b
00f2357661ecbba1a3,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-581506,Uid:b2dc81ade1bbda73868f61223889f8f4,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1746900136360595349,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-581506,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2dc81ade1bbda73868f61223889f8f4,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b2dc81ade1bbda73868f61223889f8f4,kubernetes.io/config.seen: 2025-05-10T18:01:14.852553896Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:00c4138d2ab0d3a6880991ae6ca2f7c7e3c2de33b60a469043a91f7f8adef12d,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:ea7d9372-7c9e-444b-a628-0dfc4003f07d,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1746900136287266774,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reco
ncile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea7d9372-7c9e-444b-a628-0dfc4003f07d,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-05-10T18:01:20.683909970Z
,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=3095d204-0b7f-4300-a8b8-7c73a05a262f name=/runtime.v1.RuntimeService/ListPodSandbox
	May 10 18:18:49 functional-581506 crio[5891]: time="2025-05-10 18:18:49.843767847Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2d62fc1f-7ba7-461a-b8e5-f36d2332f107 name=/runtime.v1.RuntimeService/ListContainers
	May 10 18:18:49 functional-581506 crio[5891]: time="2025-05-10 18:18:49.843840663Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2d62fc1f-7ba7-461a-b8e5-f36d2332f107 name=/runtime.v1.RuntimeService/ListContainers
	May 10 18:18:49 functional-581506 crio[5891]: time="2025-05-10 18:18:49.844143845Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bc59d73bfa2bf2b1a39e797a7d2b573e644354a5079881f3dd26cec1c252aba,PodSandboxId:ac1fe88b05f85fd3070ec6db14c318d8b19cd922062770aa6fc6b88cf2bc0f14,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1746900274095849002,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-t4rcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1c5c10-5db3-43e0-935a-0549799273f3,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca40d958630336ad5282e3e644a344eb6222b09f601d44b816dbc17429e58924,PodSandboxId:0c91495cd04f27933de8b107c48b7ad6314a49c58ac7a22c6acb1832e85de258,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,State:CONTAINER_RUNNING,CreatedAt:1746900270663661287,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-581506,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 65ecf0b12922dcc8259f7d51baab7e18,},Annotations:map[string]string{io.kubernetes.container.hash: 2e2dc675,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:206d421221f482411c4e5a5ef3f7102eccd8b38f07c242446855962f9958f985,PodSandboxId:a71305dd0a11cb4fec07b8ecece405b394e529aed22658f28110cc632eb39534,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1746900267571783633,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: ea7d9372-7c9e-444b-a628-0dfc4003f07d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bad506e8de60f9ec83d122523ca19234a72175234ffd3433d02684eb651ce9d,PodSandboxId:ad6bf190d55676203ab65df23981cd676ca08ed2bc2eef1dd05517d694c7e66e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,State:CONTAINER_RUNNING,CreatedAt:1746900267584333315,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-581506,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: b2dc81ade1bbda73868f61223889f8f4,},Annotations:map[string]string{io.kubernetes.container.hash: 20846f37,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1002d7979feaa7a0860a8934e8992ae4fdc369b64f2a34d3a93bf01f4e8015e3,PodSandboxId:1713a07d44b66f7d807e2bd691e25e7ecdd6e7c5d84c1261729e464047a1a031,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1746900267652578505,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-581506,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe62e874
0903c7a0badf385e7524512e,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c908b9ef4e2dd4afa4d8c8077af1366569126a578de927d95d14f07813040bab,PodSandboxId:da31c3a5af7bf008afa7c113669c143c8daf56d21cd077d4cf6dc85664b412de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,State:CONTAINER_RUNNING,CreatedAt:1746900267384004977,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sxk9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f4ab1a-93f7-4c1e-bcbe-5f9c9daaae46,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 2406bd3f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b4eccacbeea6a58cc9c575f2c2bf5f8297029f9c9d2a9264bcf3e69644b4c28,PodSandboxId:e49ea2b58308c6c0b9b2908ae1ab6a5818f361d3a75849eac0ab8eb63fab41ca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,State:CONTAINER_EXITED,CreatedAt:1746900143518984605,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sxk9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f4ab1a-93f7-4c1e-bcbe-5f9c9daaae46,},Annotations:map[string]string{io.kubernet
es.container.hash: 2406bd3f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ddf6914642a098d580c48db641460c4197df74a06bf7008e362f610f185934d,PodSandboxId:00c4138d2ab0d3a6880991ae6ca2f7c7e3c2de33b60a469043a91f7f8adef12d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1746900143498396555,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea7d9372-7c9e-444b-a628-0dfc4003f07d,},Annotations:map[string]string{io.kubernetes.container.
hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67bef24b725ebf7a2b7f343d7456516d6b5de38118f9cf48e7d70d9146ce2087,PodSandboxId:e30af250008246b61b90a3718d1c328f2984559c29b8526e0386129454a98b4a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_EXITED,CreatedAt:1746900143526731809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-t4rcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1c5c10-5db3-43e0-935a-0549799273f3,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.con
tainer.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5879bea6c3a25517766471c3eec758ce0c6d853db7055e1f3505263a674ed969,PodSandboxId:2cc3ee9d3458fbdf619a3c176b445eff63eefe6d42ab071484b6ca448013de07,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_EXITED,CreatedAt:1746900136904565424,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-581506,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe62e8740903c7a0badf385e7524512e,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74fd0b7de642965eb7e03cf324017cb2195034685758e46efbd5e6997aba9ae5,PodSandboxId:45aa7f96fbe49dd74e9cdfcc97884ce5caba88b39b6e9b00f2357661ecbba1a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,State:CONTAINER_EXITED,CreatedAt:1746900136908093042,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functiona
l-581506,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2dc81ade1bbda73868f61223889f8f4,},Annotations:map[string]string{io.kubernetes.container.hash: 20846f37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc42d63e6220a437de1d056d765ed97df2e6978798401b10283f61c7b1bc895b,PodSandboxId:6ed00def2c968d5a51634c7dafc6e6cc749b20e361a2365659842d41ca79ff9c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,State:CONTAINER_EXITED,CreatedAt:1746900136856417357,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-581506,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a886f34999ac0d6b56a638cab77f640,},Annotations:map[string]string{io.kubernetes.container.hash: fd54b99d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2d62fc1f-7ba7-461a-b8e5-f36d2332f107 name=/runtime.v1.RuntimeService/ListContainers
	May 10 18:18:49 functional-581506 crio[5891]: time="2025-05-10 18:18:49.887458754Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f02ee6d2-79e3-460c-972d-a3ccd7db3558 name=/runtime.v1.RuntimeService/Version
	May 10 18:18:49 functional-581506 crio[5891]: time="2025-05-10 18:18:49.887552935Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f02ee6d2-79e3-460c-972d-a3ccd7db3558 name=/runtime.v1.RuntimeService/Version
	May 10 18:18:49 functional-581506 crio[5891]: time="2025-05-10 18:18:49.889315478Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b376bcdb-7728-483e-9856-e1b16da5a862 name=/runtime.v1.ImageService/ImageFsInfo
	May 10 18:18:49 functional-581506 crio[5891]: time="2025-05-10 18:18:49.889822165Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746901129889799457,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:165066,},InodesUsed:&UInt64Value{Value:82,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b376bcdb-7728-483e-9856-e1b16da5a862 name=/runtime.v1.ImageService/ImageFsInfo
	May 10 18:18:49 functional-581506 crio[5891]: time="2025-05-10 18:18:49.890646983Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9a2b1012-88ab-4bd2-a7c3-4512e4583658 name=/runtime.v1.RuntimeService/ListContainers
	May 10 18:18:49 functional-581506 crio[5891]: time="2025-05-10 18:18:49.890719908Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9a2b1012-88ab-4bd2-a7c3-4512e4583658 name=/runtime.v1.RuntimeService/ListContainers
	May 10 18:18:49 functional-581506 crio[5891]: time="2025-05-10 18:18:49.891045240Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bc59d73bfa2bf2b1a39e797a7d2b573e644354a5079881f3dd26cec1c252aba,PodSandboxId:ac1fe88b05f85fd3070ec6db14c318d8b19cd922062770aa6fc6b88cf2bc0f14,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1746900274095849002,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-t4rcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1c5c10-5db3-43e0-935a-0549799273f3,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca40d958630336ad5282e3e644a344eb6222b09f601d44b816dbc17429e58924,PodSandboxId:0c91495cd04f27933de8b107c48b7ad6314a49c58ac7a22c6acb1832e85de258,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,State:CONTAINER_RUNNING,CreatedAt:1746900270663661287,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-581506,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 65ecf0b12922dcc8259f7d51baab7e18,},Annotations:map[string]string{io.kubernetes.container.hash: 2e2dc675,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:206d421221f482411c4e5a5ef3f7102eccd8b38f07c242446855962f9958f985,PodSandboxId:a71305dd0a11cb4fec07b8ecece405b394e529aed22658f28110cc632eb39534,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1746900267571783633,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: ea7d9372-7c9e-444b-a628-0dfc4003f07d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bad506e8de60f9ec83d122523ca19234a72175234ffd3433d02684eb651ce9d,PodSandboxId:ad6bf190d55676203ab65df23981cd676ca08ed2bc2eef1dd05517d694c7e66e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,State:CONTAINER_RUNNING,CreatedAt:1746900267584333315,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-581506,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: b2dc81ade1bbda73868f61223889f8f4,},Annotations:map[string]string{io.kubernetes.container.hash: 20846f37,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1002d7979feaa7a0860a8934e8992ae4fdc369b64f2a34d3a93bf01f4e8015e3,PodSandboxId:1713a07d44b66f7d807e2bd691e25e7ecdd6e7c5d84c1261729e464047a1a031,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1746900267652578505,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-581506,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe62e874
0903c7a0badf385e7524512e,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c908b9ef4e2dd4afa4d8c8077af1366569126a578de927d95d14f07813040bab,PodSandboxId:da31c3a5af7bf008afa7c113669c143c8daf56d21cd077d4cf6dc85664b412de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,State:CONTAINER_RUNNING,CreatedAt:1746900267384004977,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sxk9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f4ab1a-93f7-4c1e-bcbe-5f9c9daaae46,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 2406bd3f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b4eccacbeea6a58cc9c575f2c2bf5f8297029f9c9d2a9264bcf3e69644b4c28,PodSandboxId:e49ea2b58308c6c0b9b2908ae1ab6a5818f361d3a75849eac0ab8eb63fab41ca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,State:CONTAINER_EXITED,CreatedAt:1746900143518984605,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sxk9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f4ab1a-93f7-4c1e-bcbe-5f9c9daaae46,},Annotations:map[string]string{io.kubernet
es.container.hash: 2406bd3f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ddf6914642a098d580c48db641460c4197df74a06bf7008e362f610f185934d,PodSandboxId:00c4138d2ab0d3a6880991ae6ca2f7c7e3c2de33b60a469043a91f7f8adef12d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1746900143498396555,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea7d9372-7c9e-444b-a628-0dfc4003f07d,},Annotations:map[string]string{io.kubernetes.container.
hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67bef24b725ebf7a2b7f343d7456516d6b5de38118f9cf48e7d70d9146ce2087,PodSandboxId:e30af250008246b61b90a3718d1c328f2984559c29b8526e0386129454a98b4a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_EXITED,CreatedAt:1746900143526731809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-t4rcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1c5c10-5db3-43e0-935a-0549799273f3,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.con
tainer.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5879bea6c3a25517766471c3eec758ce0c6d853db7055e1f3505263a674ed969,PodSandboxId:2cc3ee9d3458fbdf619a3c176b445eff63eefe6d42ab071484b6ca448013de07,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_EXITED,CreatedAt:1746900136904565424,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-581506,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe62e8740903c7a0badf385e7524512e,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74fd0b7de642965eb7e03cf324017cb2195034685758e46efbd5e6997aba9ae5,PodSandboxId:45aa7f96fbe49dd74e9cdfcc97884ce5caba88b39b6e9b00f2357661ecbba1a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,State:CONTAINER_EXITED,CreatedAt:1746900136908093042,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functiona
l-581506,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2dc81ade1bbda73868f61223889f8f4,},Annotations:map[string]string{io.kubernetes.container.hash: 20846f37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc42d63e6220a437de1d056d765ed97df2e6978798401b10283f61c7b1bc895b,PodSandboxId:6ed00def2c968d5a51634c7dafc6e6cc749b20e361a2365659842d41ca79ff9c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,State:CONTAINER_EXITED,CreatedAt:1746900136856417357,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-581506,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a886f34999ac0d6b56a638cab77f640,},Annotations:map[string]string{io.kubernetes.container.hash: fd54b99d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9a2b1012-88ab-4bd2-a7c3-4512e4583658 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2bc59d73bfa2b       1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b   14 minutes ago      Running             coredns                   2                   ac1fe88b05f85       coredns-674b8bbfcf-t4rcv
	ca40d95863033       6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4   14 minutes ago      Running             kube-apiserver            0                   0c91495cd04f2       kube-apiserver-functional-581506
	1002d7979feaa       499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1   14 minutes ago      Running             etcd                      2                   1713a07d44b66       etcd-functional-581506
	5bad506e8de60       1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02   14 minutes ago      Running             kube-controller-manager   2                   ad6bf190d5567       kube-controller-manager-functional-581506
	206d421221f48       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       4                   a71305dd0a11c       storage-provisioner
	c908b9ef4e2dd       f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68   14 minutes ago      Running             kube-proxy                2                   da31c3a5af7bf       kube-proxy-sxk9c
	67bef24b725eb       1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b   16 minutes ago      Exited              coredns                   1                   e30af25000824       coredns-674b8bbfcf-t4rcv
	2b4eccacbeea6       f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68   16 minutes ago      Exited              kube-proxy                1                   e49ea2b58308c       kube-proxy-sxk9c
	9ddf6914642a0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Exited              storage-provisioner       3                   00c4138d2ab0d       storage-provisioner
	74fd0b7de6429       1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02   16 minutes ago      Exited              kube-controller-manager   1                   45aa7f96fbe49       kube-controller-manager-functional-581506
	5879bea6c3a25       499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1   16 minutes ago      Exited              etcd                      1                   2cc3ee9d3458f       etcd-functional-581506
	bc42d63e6220a       8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4   16 minutes ago      Exited              kube-scheduler            1                   6ed00def2c968       kube-scheduler-functional-581506
	
	
	==> coredns [2bc59d73bfa2bf2b1a39e797a7d2b573e644354a5079881f3dd26cec1c252aba] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.0
	linux/amd64, go1.23.3, 51e11f1
	[INFO] 127.0.0.1:47540 - 49812 "HINFO IN 3817603910003911590.6949861336943334396. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.032959679s
	
	
	==> coredns [67bef24b725ebf7a2b7f343d7456516d6b5de38118f9cf48e7d70d9146ce2087] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.0
	linux/amd64, go1.23.3, 51e11f1
	[INFO] 127.0.0.1:59748 - 21482 "HINFO IN 2761340015405739266.7136990693185190550. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022015892s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-581506
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-581506
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e96c83983357cd8557f3cdfe077a25cc73d485a4
	                    minikube.k8s.io/name=functional-581506
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_05_10T18_01_15_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 May 2025 18:01:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-581506
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 May 2025 18:18:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 May 2025 18:14:13 +0000   Sat, 10 May 2025 18:01:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 May 2025 18:14:13 +0000   Sat, 10 May 2025 18:01:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 May 2025 18:14:13 +0000   Sat, 10 May 2025 18:01:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 May 2025 18:14:13 +0000   Sat, 10 May 2025 18:01:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.52
	  Hostname:    functional-581506
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912748Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912748Ki
	  pods:               110
	System Info:
	  Machine ID:                 78012ce40601437bb4c2db7efb9be33a
	  System UUID:                78012ce4-0601-437b-b4c2-db7efb9be33a
	  Boot ID:                    832a94bf-8db0-4adf-aef4-977728fcc1b7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2024.11.2
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.33.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-674b8bbfcf-t4rcv                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     17m
	  kube-system                 etcd-functional-581506                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         17m
	  kube-system                 kube-apiserver-functional-581506             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-functional-581506    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-sxk9c                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-functional-581506             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 17m                kube-proxy       
	  Normal  Starting                 14m                kube-proxy       
	  Normal  Starting                 16m                kube-proxy       
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  17m                kubelet          Node functional-581506 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m                kubelet          Node functional-581506 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m                kubelet          Node functional-581506 status is now: NodeHasSufficientPID
	  Normal  NodeReady                17m                kubelet          Node functional-581506 status is now: NodeReady
	  Normal  RegisteredNode           17m                node-controller  Node functional-581506 event: Registered Node functional-581506 in Controller
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node functional-581506 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node functional-581506 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node functional-581506 status is now: NodeHasSufficientPID
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           16m                node-controller  Node functional-581506 event: Registered Node functional-581506 in Controller
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node functional-581506 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node functional-581506 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node functional-581506 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                node-controller  Node functional-581506 event: Registered Node functional-581506 in Controller
	
	
	==> dmesg <==
	[May10 18:00] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.000002] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.001507] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000567] (rpcbind)[143]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.143993] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000004] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.090211] kauditd_printk_skb: 1 callbacks suppressed
	[May10 18:01] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.148940] kauditd_printk_skb: 67 callbacks suppressed
	[  +0.675797] kauditd_printk_skb: 19 callbacks suppressed
	[ +10.795650] kauditd_printk_skb: 76 callbacks suppressed
	[ +20.864703] kauditd_printk_skb: 22 callbacks suppressed
	[May10 18:02] kauditd_printk_skb: 34 callbacks suppressed
	[  +4.648687] kauditd_printk_skb: 132 callbacks suppressed
	[  +5.789009] kauditd_printk_skb: 9 callbacks suppressed
	[ +13.341647] kauditd_printk_skb: 12 callbacks suppressed
	[May10 18:04] kauditd_printk_skb: 90 callbacks suppressed
	[  +1.054815] kauditd_printk_skb: 130 callbacks suppressed
	[  +0.904906] kauditd_printk_skb: 16 callbacks suppressed
	[May10 18:08] kauditd_printk_skb: 22 callbacks suppressed
	
	
	==> etcd [1002d7979feaa7a0860a8934e8992ae4fdc369b64f2a34d3a93bf01f4e8015e3] <==
	{"level":"info","ts":"2025-05-10T18:04:30.583609Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-05-10T18:04:30.584312Z","caller":"embed/etcd.go:762","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-05-10T18:04:30.584593Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"3baf479dc31b93a9","initial-advertise-peer-urls":["https://192.168.39.52:2380"],"listen-peer-urls":["https://192.168.39.52:2380"],"advertise-client-urls":["https://192.168.39.52:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.52:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-05-10T18:04:30.584641Z","caller":"embed/etcd.go:908","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-05-10T18:04:30.584794Z","caller":"embed/etcd.go:633","msg":"serving peer traffic","address":"192.168.39.52:2380"}
	{"level":"info","ts":"2025-05-10T18:04:30.584823Z","caller":"embed/etcd.go:603","msg":"cmux::serve","address":"192.168.39.52:2380"}
	{"level":"info","ts":"2025-05-10T18:04:31.534989Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3baf479dc31b93a9 is starting a new election at term 3"}
	{"level":"info","ts":"2025-05-10T18:04:31.535049Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3baf479dc31b93a9 became pre-candidate at term 3"}
	{"level":"info","ts":"2025-05-10T18:04:31.535080Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3baf479dc31b93a9 received MsgPreVoteResp from 3baf479dc31b93a9 at term 3"}
	{"level":"info","ts":"2025-05-10T18:04:31.535099Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3baf479dc31b93a9 became candidate at term 4"}
	{"level":"info","ts":"2025-05-10T18:04:31.535152Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3baf479dc31b93a9 received MsgVoteResp from 3baf479dc31b93a9 at term 4"}
	{"level":"info","ts":"2025-05-10T18:04:31.535163Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3baf479dc31b93a9 became leader at term 4"}
	{"level":"info","ts":"2025-05-10T18:04:31.535174Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3baf479dc31b93a9 elected leader 3baf479dc31b93a9 at term 4"}
	{"level":"info","ts":"2025-05-10T18:04:31.541728Z","caller":"etcdserver/server.go:2144","msg":"published local member to cluster through raft","local-member-id":"3baf479dc31b93a9","local-member-attributes":"{Name:functional-581506 ClientURLs:[https://192.168.39.52:2379]}","request-path":"/0/members/3baf479dc31b93a9/attributes","cluster-id":"26c9414d925de00c","publish-timeout":"7s"}
	{"level":"info","ts":"2025-05-10T18:04:31.541955Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T18:04:31.542039Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T18:04:31.542717Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T18:04:31.544923Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-05-10T18:04:31.544977Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-05-10T18:04:31.545343Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T18:04:31.545926Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-05-10T18:04:31.550131Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.52:2379"}
	{"level":"info","ts":"2025-05-10T18:14:31.638045Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":969}
	{"level":"info","ts":"2025-05-10T18:14:31.648248Z","caller":"mvcc/kvstore_compaction.go:71","msg":"finished scheduled compaction","compact-revision":969,"took":"9.559967ms","hash":1876746662,"current-db-size-bytes":3153920,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":3153920,"current-db-size-in-use":"3.2 MB"}
	{"level":"info","ts":"2025-05-10T18:14:31.648346Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":1876746662,"revision":969,"compact-revision":-1}
	
	
	==> etcd [5879bea6c3a25517766471c3eec758ce0c6d853db7055e1f3505263a674ed969] <==
	{"level":"info","ts":"2025-05-10T18:02:21.030061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3baf479dc31b93a9 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-05-10T18:02:21.030106Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3baf479dc31b93a9 received MsgPreVoteResp from 3baf479dc31b93a9 at term 2"}
	{"level":"info","ts":"2025-05-10T18:02:21.030146Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3baf479dc31b93a9 became candidate at term 3"}
	{"level":"info","ts":"2025-05-10T18:02:21.030207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3baf479dc31b93a9 received MsgVoteResp from 3baf479dc31b93a9 at term 3"}
	{"level":"info","ts":"2025-05-10T18:02:21.030228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3baf479dc31b93a9 became leader at term 3"}
	{"level":"info","ts":"2025-05-10T18:02:21.030247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3baf479dc31b93a9 elected leader 3baf479dc31b93a9 at term 3"}
	{"level":"info","ts":"2025-05-10T18:02:21.038152Z","caller":"etcdserver/server.go:2144","msg":"published local member to cluster through raft","local-member-id":"3baf479dc31b93a9","local-member-attributes":"{Name:functional-581506 ClientURLs:[https://192.168.39.52:2379]}","request-path":"/0/members/3baf479dc31b93a9/attributes","cluster-id":"26c9414d925de00c","publish-timeout":"7s"}
	{"level":"info","ts":"2025-05-10T18:02:21.038369Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T18:02:21.041197Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T18:02:21.041717Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T18:02:21.044437Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-05-10T18:02:21.044826Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T18:02:21.052743Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.52:2379"}
	{"level":"info","ts":"2025-05-10T18:02:21.064014Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-05-10T18:02:21.065946Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-05-10T18:02:47.594480Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-05-10T18:02:47.594538Z","caller":"embed/etcd.go:408","msg":"closing etcd server","name":"functional-581506","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.52:2380"],"advertise-client-urls":["https://192.168.39.52:2379"]}
	{"level":"warn","ts":"2025-05-10T18:02:47.692253Z","caller":"embed/serve.go:235","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.52:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-05-10T18:02:47.692414Z","caller":"embed/serve.go:237","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.52:2379: use of closed network connection"}
	{"level":"info","ts":"2025-05-10T18:02:47.692332Z","caller":"etcdserver/server.go:1546","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"3baf479dc31b93a9","current-leader-member-id":"3baf479dc31b93a9"}
	{"level":"warn","ts":"2025-05-10T18:02:47.692493Z","caller":"embed/serve.go:235","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-05-10T18:02:47.692590Z","caller":"embed/serve.go:237","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-05-10T18:02:47.696041Z","caller":"embed/etcd.go:613","msg":"stopping serving peer traffic","address":"192.168.39.52:2380"}
	{"level":"info","ts":"2025-05-10T18:02:47.696298Z","caller":"embed/etcd.go:618","msg":"stopped serving peer traffic","address":"192.168.39.52:2380"}
	{"level":"info","ts":"2025-05-10T18:02:47.696390Z","caller":"embed/etcd.go:410","msg":"closed etcd server","name":"functional-581506","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.52:2380"],"advertise-client-urls":["https://192.168.39.52:2379"]}
	
	
	==> kernel <==
	 18:18:50 up 18 min,  0 user,  load average: 0.18, 0.14, 0.11
	Linux functional-581506 5.10.207 #1 SMP Fri May 9 03:49:24 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2024.11.2"
	
	
	==> kube-apiserver [ca40d958630336ad5282e3e644a344eb6222b09f601d44b816dbc17429e58924] <==
	I0510 18:04:32.992976       1 shared_informer.go:357] "Caches are synced" controller="node_authorizer"
	I0510 18:04:33.822508       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0510 18:04:33.885749       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0510 18:04:35.065207       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0510 18:04:35.106821       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0510 18:04:35.136969       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0510 18:04:35.144671       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0510 18:04:36.229591       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0510 18:04:36.517419       1 controller.go:667] quota admission added evaluator for: endpoints
	I0510 18:04:36.581373       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 18:04:36.669214       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0510 18:08:43.916202       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 18:08:43.922038       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.104.141.104"}
	I0510 18:08:47.337692       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 18:08:48.351686       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.104.34.72"}
	I0510 18:08:48.356985       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 18:08:49.106605       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 18:08:49.110056       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.108.45.176"}
	I0510 18:08:54.502666       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 18:08:54.508618       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.111.228.11"}
	I0510 18:14:32.894709       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 18:16:07.436962       1 controller.go:667] quota admission added evaluator for: namespaces
	I0510 18:16:07.748671       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.114.28"}
	I0510 18:16:07.755311       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 18:16:07.788681       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.60.139"}
	
	
	==> kube-controller-manager [5bad506e8de60f9ec83d122523ca19234a72175234ffd3433d02684eb651ce9d] <==
	I0510 18:04:36.183471       1 shared_informer.go:357] "Caches are synced" controller="PVC protection"
	I0510 18:04:36.186114       1 shared_informer.go:357] "Caches are synced" controller="GC"
	I0510 18:04:36.188512       1 shared_informer.go:357] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0510 18:04:36.213570       1 shared_informer.go:357] "Caches are synced" controller="ephemeral"
	I0510 18:04:36.215649       1 shared_informer.go:357] "Caches are synced" controller="endpoint_slice_mirroring"
	I0510 18:04:36.228498       1 shared_informer.go:357] "Caches are synced" controller="ReplicaSet"
	I0510 18:04:36.233748       1 shared_informer.go:357] "Caches are synced" controller="stateful set"
	I0510 18:04:36.311659       1 shared_informer.go:357] "Caches are synced" controller="daemon sets"
	I0510 18:04:36.312725       1 shared_informer.go:357] "Caches are synced" controller="attach detach"
	I0510 18:04:36.380908       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0510 18:04:36.428129       1 shared_informer.go:357] "Caches are synced" controller="service account"
	I0510 18:04:36.472582       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0510 18:04:36.518918       1 shared_informer.go:357] "Caches are synced" controller="namespace"
	I0510 18:04:36.901758       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0510 18:04:36.901798       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0510 18:04:36.901804       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0510 18:04:36.904012       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	E0510 18:16:07.562224       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 18:16:07.570114       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 18:16:07.576704       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 18:16:07.584223       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 18:16:07.591210       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 18:16:07.595649       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 18:16:07.607783       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 18:16:07.608070       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [74fd0b7de642965eb7e03cf324017cb2195034685758e46efbd5e6997aba9ae5] <==
	I0510 18:02:26.095808       1 shared_informer.go:357] "Caches are synced" controller="service account"
	I0510 18:02:26.107982       1 shared_informer.go:357] "Caches are synced" controller="endpoint_slice"
	I0510 18:02:26.113575       1 shared_informer.go:357] "Caches are synced" controller="namespace"
	I0510 18:02:26.114829       1 shared_informer.go:357] "Caches are synced" controller="ReplicaSet"
	I0510 18:02:26.118078       1 shared_informer.go:357] "Caches are synced" controller="cronjob"
	I0510 18:02:26.121663       1 shared_informer.go:357] "Caches are synced" controller="daemon sets"
	I0510 18:02:26.127470       1 shared_informer.go:357] "Caches are synced" controller="deployment"
	I0510 18:02:26.128559       1 shared_informer.go:357] "Caches are synced" controller="job"
	I0510 18:02:26.135167       1 shared_informer.go:357] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0510 18:02:26.140926       1 shared_informer.go:357] "Caches are synced" controller="taint-eviction-controller"
	I0510 18:02:26.149406       1 shared_informer.go:357] "Caches are synced" controller="ClusterRoleAggregator"
	I0510 18:02:26.194556       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrapproving"
	I0510 18:02:26.234159       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0510 18:02:26.234341       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0510 18:02:26.234405       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0510 18:02:26.234431       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0510 18:02:26.248426       1 shared_informer.go:357] "Caches are synced" controller="HPA"
	I0510 18:02:26.262947       1 shared_informer.go:357] "Caches are synced" controller="persistent volume"
	I0510 18:02:26.312142       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0510 18:02:26.393774       1 shared_informer.go:357] "Caches are synced" controller="attach detach"
	I0510 18:02:26.402315       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0510 18:02:26.844411       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0510 18:02:26.844453       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0510 18:02:26.844461       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0510 18:02:26.854122       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [2b4eccacbeea6a58cc9c575f2c2bf5f8297029f9c9d2a9264bcf3e69644b4c28] <==
	E0510 18:02:23.839411       1 proxier.go:732] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0510 18:02:23.859564       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.39.52"]
	E0510 18:02:23.859640       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0510 18:02:23.913819       1 server_linux.go:122] "No iptables support for family" ipFamily="IPv6"
	I0510 18:02:23.913976       1 server.go:256] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0510 18:02:23.914004       1 server_linux.go:145] "Using iptables Proxier"
	I0510 18:02:23.928588       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0510 18:02:23.928908       1 server.go:516] "Version info" version="v1.33.0"
	I0510 18:02:23.928939       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0510 18:02:23.939076       1 config.go:199] "Starting service config controller"
	I0510 18:02:23.939113       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0510 18:02:23.939140       1 config.go:105] "Starting endpoint slice config controller"
	I0510 18:02:23.939144       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0510 18:02:23.939155       1 config.go:440] "Starting serviceCIDR config controller"
	I0510 18:02:23.939158       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0510 18:02:23.939762       1 config.go:329] "Starting node config controller"
	I0510 18:02:23.939818       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0510 18:02:24.039379       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
	I0510 18:02:24.039423       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	I0510 18:02:24.039626       1 shared_informer.go:357] "Caches are synced" controller="service config"
	I0510 18:02:24.040552       1 shared_informer.go:357] "Caches are synced" controller="node config"
	
	
	==> kube-proxy [c908b9ef4e2dd4afa4d8c8077af1366569126a578de927d95d14f07813040bab] <==
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0510 18:04:28.043194       1 server.go:704] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-581506\": dial tcp 192.168.39.52:8441: connect: connection refused"
	E0510 18:04:29.208643       1 server.go:704] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-581506\": dial tcp 192.168.39.52:8441: connect: connection refused"
	I0510 18:04:32.936969       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.39.52"]
	E0510 18:04:32.937354       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0510 18:04:33.046754       1 server_linux.go:122] "No iptables support for family" ipFamily="IPv6"
	I0510 18:04:33.046916       1 server.go:256] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0510 18:04:33.046981       1 server_linux.go:145] "Using iptables Proxier"
	I0510 18:04:33.060194       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0510 18:04:33.060559       1 server.go:516] "Version info" version="v1.33.0"
	I0510 18:04:33.060762       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0510 18:04:33.065256       1 config.go:199] "Starting service config controller"
	I0510 18:04:33.068415       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0510 18:04:33.068567       1 config.go:105] "Starting endpoint slice config controller"
	I0510 18:04:33.068590       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0510 18:04:33.068693       1 config.go:440] "Starting serviceCIDR config controller"
	I0510 18:04:33.073303       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0510 18:04:33.073374       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
	I0510 18:04:33.068977       1 config.go:329] "Starting node config controller"
	I0510 18:04:33.073428       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0510 18:04:33.169005       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	I0510 18:04:33.169120       1 shared_informer.go:357] "Caches are synced" controller="service config"
	I0510 18:04:33.173643       1 shared_informer.go:357] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [bc42d63e6220a437de1d056d765ed97df2e6978798401b10283f61c7b1bc895b] <==
	I0510 18:02:21.278714       1 serving.go:386] Generated self-signed cert in-memory
	W0510 18:02:22.765853       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0510 18:02:22.766075       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0510 18:02:22.766103       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0510 18:02:22.766193       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0510 18:02:22.804654       1 server.go:171] "Starting Kubernetes Scheduler" version="v1.33.0"
	I0510 18:02:22.804770       1 server.go:173] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0510 18:02:22.806849       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0510 18:02:22.807232       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0510 18:02:22.807327       1 shared_informer.go:350] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0510 18:02:22.807360       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0510 18:02:22.907841       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0510 18:02:47.604715       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	May 10 18:18:16 functional-581506 kubelet[6715]: E0510 18:18:16.833177    6715 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-581506_kube-system_0a886f34999ac0d6b56a638cab77f640_2\" already exists" pod="kube-system/kube-scheduler-functional-581506"
	May 10 18:18:16 functional-581506 kubelet[6715]: E0510 18:18:16.833295    6715 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-581506_kube-system_0a886f34999ac0d6b56a638cab77f640_2\" already exists" pod="kube-system/kube-scheduler-functional-581506"
	May 10 18:18:16 functional-581506 kubelet[6715]: E0510 18:18:16.833389    6715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-functional-581506_kube-system(0a886f34999ac0d6b56a638cab77f640)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-scheduler-functional-581506_kube-system(0a886f34999ac0d6b56a638cab77f640)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-scheduler-functional-581506_kube-system_0a886f34999ac0d6b56a638cab77f640_2\\\" already exists\"" pod="kube-system/kube-scheduler-functional-581506" podUID="0a886f34999ac0d6b56a638cab77f640"
	May 10 18:18:20 functional-581506 kubelet[6715]: E0510 18:18:20.178671    6715 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746901100178335289,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:165066,},InodesUsed:&UInt64Value{Value:82,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:18:20 functional-581506 kubelet[6715]: E0510 18:18:20.179234    6715 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746901100178335289,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:165066,},InodesUsed:&UInt64Value{Value:82,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:18:27 functional-581506 kubelet[6715]: E0510 18:18:27.838660    6715 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-581506_kube-system_0a886f34999ac0d6b56a638cab77f640_2\" already exists"
	May 10 18:18:27 functional-581506 kubelet[6715]: E0510 18:18:27.838736    6715 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-581506_kube-system_0a886f34999ac0d6b56a638cab77f640_2\" already exists" pod="kube-system/kube-scheduler-functional-581506"
	May 10 18:18:27 functional-581506 kubelet[6715]: E0510 18:18:27.838755    6715 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-581506_kube-system_0a886f34999ac0d6b56a638cab77f640_2\" already exists" pod="kube-system/kube-scheduler-functional-581506"
	May 10 18:18:27 functional-581506 kubelet[6715]: E0510 18:18:27.838799    6715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-functional-581506_kube-system(0a886f34999ac0d6b56a638cab77f640)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-scheduler-functional-581506_kube-system(0a886f34999ac0d6b56a638cab77f640)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-scheduler-functional-581506_kube-system_0a886f34999ac0d6b56a638cab77f640_2\\\" already exists\"" pod="kube-system/kube-scheduler-functional-581506" podUID="0a886f34999ac0d6b56a638cab77f640"
	May 10 18:18:29 functional-581506 kubelet[6715]: E0510 18:18:29.942642    6715 manager.go:1116] Failed to create existing container: /kubepods/besteffort/podc3f4ab1a-93f7-4c1e-bcbe-5f9c9daaae46/crio-e49ea2b58308c6c0b9b2908ae1ab6a5818f361d3a75849eac0ab8eb63fab41ca: Error finding container e49ea2b58308c6c0b9b2908ae1ab6a5818f361d3a75849eac0ab8eb63fab41ca: Status 404 returned error can't find the container with id e49ea2b58308c6c0b9b2908ae1ab6a5818f361d3a75849eac0ab8eb63fab41ca
	May 10 18:18:29 functional-581506 kubelet[6715]: E0510 18:18:29.943232    6715 manager.go:1116] Failed to create existing container: /kubepods/burstable/podfe62e8740903c7a0badf385e7524512e/crio-2cc3ee9d3458fbdf619a3c176b445eff63eefe6d42ab071484b6ca448013de07: Error finding container 2cc3ee9d3458fbdf619a3c176b445eff63eefe6d42ab071484b6ca448013de07: Status 404 returned error can't find the container with id 2cc3ee9d3458fbdf619a3c176b445eff63eefe6d42ab071484b6ca448013de07
	May 10 18:18:29 functional-581506 kubelet[6715]: E0510 18:18:29.943662    6715 manager.go:1116] Failed to create existing container: /kubepods/besteffort/podea7d9372-7c9e-444b-a628-0dfc4003f07d/crio-00c4138d2ab0d3a6880991ae6ca2f7c7e3c2de33b60a469043a91f7f8adef12d: Error finding container 00c4138d2ab0d3a6880991ae6ca2f7c7e3c2de33b60a469043a91f7f8adef12d: Status 404 returned error can't find the container with id 00c4138d2ab0d3a6880991ae6ca2f7c7e3c2de33b60a469043a91f7f8adef12d
	May 10 18:18:29 functional-581506 kubelet[6715]: E0510 18:18:29.944311    6715 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod0a886f34999ac0d6b56a638cab77f640/crio-6ed00def2c968d5a51634c7dafc6e6cc749b20e361a2365659842d41ca79ff9c: Error finding container 6ed00def2c968d5a51634c7dafc6e6cc749b20e361a2365659842d41ca79ff9c: Status 404 returned error can't find the container with id 6ed00def2c968d5a51634c7dafc6e6cc749b20e361a2365659842d41ca79ff9c
	May 10 18:18:29 functional-581506 kubelet[6715]: E0510 18:18:29.944603    6715 manager.go:1116] Failed to create existing container: /kubepods/burstable/podb2dc81ade1bbda73868f61223889f8f4/crio-45aa7f96fbe49dd74e9cdfcc97884ce5caba88b39b6e9b00f2357661ecbba1a3: Error finding container 45aa7f96fbe49dd74e9cdfcc97884ce5caba88b39b6e9b00f2357661ecbba1a3: Status 404 returned error can't find the container with id 45aa7f96fbe49dd74e9cdfcc97884ce5caba88b39b6e9b00f2357661ecbba1a3
	May 10 18:18:29 functional-581506 kubelet[6715]: E0510 18:18:29.944901    6715 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod0b1c5c10-5db3-43e0-935a-0549799273f3/crio-e30af250008246b61b90a3718d1c328f2984559c29b8526e0386129454a98b4a: Error finding container e30af250008246b61b90a3718d1c328f2984559c29b8526e0386129454a98b4a: Status 404 returned error can't find the container with id e30af250008246b61b90a3718d1c328f2984559c29b8526e0386129454a98b4a
	May 10 18:18:30 functional-581506 kubelet[6715]: E0510 18:18:30.181799    6715 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746901110181192391,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:165066,},InodesUsed:&UInt64Value{Value:82,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:18:30 functional-581506 kubelet[6715]: E0510 18:18:30.181952    6715 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746901110181192391,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:165066,},InodesUsed:&UInt64Value{Value:82,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:18:40 functional-581506 kubelet[6715]: E0510 18:18:40.184151    6715 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746901120183647609,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:165066,},InodesUsed:&UInt64Value{Value:82,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:18:40 functional-581506 kubelet[6715]: E0510 18:18:40.184199    6715 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746901120183647609,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:165066,},InodesUsed:&UInt64Value{Value:82,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:18:42 functional-581506 kubelet[6715]: E0510 18:18:42.830045    6715 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-581506_kube-system_0a886f34999ac0d6b56a638cab77f640_2\" already exists"
	May 10 18:18:42 functional-581506 kubelet[6715]: E0510 18:18:42.830412    6715 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-581506_kube-system_0a886f34999ac0d6b56a638cab77f640_2\" already exists" pod="kube-system/kube-scheduler-functional-581506"
	May 10 18:18:42 functional-581506 kubelet[6715]: E0510 18:18:42.830474    6715 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-581506_kube-system_0a886f34999ac0d6b56a638cab77f640_2\" already exists" pod="kube-system/kube-scheduler-functional-581506"
	May 10 18:18:42 functional-581506 kubelet[6715]: E0510 18:18:42.830583    6715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-functional-581506_kube-system(0a886f34999ac0d6b56a638cab77f640)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-scheduler-functional-581506_kube-system(0a886f34999ac0d6b56a638cab77f640)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-scheduler-functional-581506_kube-system_0a886f34999ac0d6b56a638cab77f640_2\\\" already exists\"" pod="kube-system/kube-scheduler-functional-581506" podUID="0a886f34999ac0d6b56a638cab77f640"
	May 10 18:18:50 functional-581506 kubelet[6715]: E0510 18:18:50.186953    6715 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746901130186531322,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:165066,},InodesUsed:&UInt64Value{Value:82,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:18:50 functional-581506 kubelet[6715]: E0510 18:18:50.187050    6715 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746901130186531322,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:165066,},InodesUsed:&UInt64Value{Value:82,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [206d421221f482411c4e5a5ef3f7102eccd8b38f07c242446855962f9958f985] <==
	W0510 18:18:26.014457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:18:28.017642       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:18:28.023734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:18:30.028096       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:18:30.033160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:18:32.036602       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:18:32.046103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:18:34.050030       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:18:34.055107       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:18:36.058985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:18:36.064413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:18:38.068245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:18:38.077244       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:18:40.081240       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:18:40.087718       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:18:42.091035       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:18:42.100378       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:18:44.103196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:18:44.109693       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:18:46.113155       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:18:46.123440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:18:48.127563       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:18:48.132852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:18:50.136646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:18:50.146058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [9ddf6914642a098d580c48db641460c4197df74a06bf7008e362f610f185934d] <==
	I0510 18:02:23.672106       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0510 18:02:23.683594       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0510 18:02:23.683625       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0510 18:02:23.702833       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:02:27.159140       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:02:31.422834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:02:35.021700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:02:38.075295       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:02:41.098182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:02:41.109385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0510 18:02:41.109555       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0510 18:02:41.109770       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-581506_a46214da-4c1e-4fc3-976f-44d996fb2ca3!
	I0510 18:02:41.110126       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fd69f3b7-01e0-4535-950c-10464666b122", APIVersion:"v1", ResourceVersion:"525", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-581506_a46214da-4c1e-4fc3-976f-44d996fb2ca3 became leader
	W0510 18:02:41.126141       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:02:41.133416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0510 18:02:41.210982       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-581506_a46214da-4c1e-4fc3-976f-44d996fb2ca3!
	W0510 18:02:43.137335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:02:43.144935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:02:45.148031       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:02:45.154106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:02:47.157824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:02:47.175021       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-581506 -n functional-581506
helpers_test.go:261: (dbg) Run:  kubectl --context functional-581506 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount hello-node-connect-58f9cf68d8-prxzn hello-node-fcfd88b6f-gmdwq mysql-58ccfd96bb-2jm87 sp-pod dashboard-metrics-scraper-5d59dccf9b-w9spf kubernetes-dashboard-7779f9b69b-ljpkm
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-581506 describe pod busybox-mount hello-node-connect-58f9cf68d8-prxzn hello-node-fcfd88b6f-gmdwq mysql-58ccfd96bb-2jm87 sp-pod dashboard-metrics-scraper-5d59dccf9b-w9spf kubernetes-dashboard-7779f9b69b-ljpkm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-581506 describe pod busybox-mount hello-node-connect-58f9cf68d8-prxzn hello-node-fcfd88b6f-gmdwq mysql-58ccfd96bb-2jm87 sp-pod dashboard-metrics-scraper-5d59dccf9b-w9spf kubernetes-dashboard-7779f9b69b-ljpkm: exit status 1 (104.393411ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  mount-munger:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    Environment:  <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t5rkt (ro)
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-t5rkt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             hello-node-connect-58f9cf68d8-prxzn
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=hello-node-connect
	                  pod-template-hash=58f9cf68d8
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-node-connect-58f9cf68d8
	Containers:
	  echoserver:
	    Image:        registry.k8s.io/echoserver:1.8
	    Port:         <none>
	    Host Port:    <none>
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vjht5 (ro)
	Volumes:
	  kube-api-access-vjht5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             hello-node-fcfd88b6f-gmdwq
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=hello-node
	                  pod-template-hash=fcfd88b6f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-node-fcfd88b6f
	Containers:
	  echoserver:
	    Image:        registry.k8s.io/echoserver:1.8
	    Port:         <none>
	    Host Port:    <none>
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-56j2g (ro)
	Volumes:
	  kube-api-access-56j2g:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             mysql-58ccfd96bb-2jm87
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=mysql
	                  pod-template-hash=58ccfd96bb
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/mysql-58ccfd96bb
	Containers:
	  mysql:
	    Image:      docker.io/mysql:5.7
	    Port:       3306/TCP
	    Host Port:  0/TCP
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-v72cr (ro)
	Volumes:
	  kube-api-access-v72cr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  myfrontend:
	    Image:        docker.io/nginx
	    Port:         <none>
	    Host Port:    <none>
	    Environment:  <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6q8c7 (ro)
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-6q8c7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5d59dccf9b-w9spf" not found
	Error from server (NotFound): pods "kubernetes-dashboard-7779f9b69b-ljpkm" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context functional-581506 describe pod busybox-mount hello-node-connect-58f9cf68d8-prxzn hello-node-fcfd88b6f-gmdwq mysql-58ccfd96bb-2jm87 sp-pod dashboard-metrics-scraper-5d59dccf9b-w9spf kubernetes-dashboard-7779f9b69b-ljpkm: exit status 1
--- FAIL: TestFunctional/parallel/MySQL (603.07s)
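Both pods described above are stuck in Pending with Node: <none>, i.e. the scheduler never assigned them to a node. A minimal sketch for inspecting scheduling from the same kubectl context (standard kubectl commands, not part of the test itself):
	# unscheduled pods keep NODE set to <none>
	kubectl --context functional-581506 get pods -o wide
	# scheduler messages (e.g. FailedScheduling) appear in the event stream
	kubectl --context functional-581506 get events --sort-by=.lastTimestamp
	# node conditions and allocatable resources
	kubectl --context functional-581506 describe nodes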

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-581506 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-581506 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-gmdwq" [1aed5add-5cc4-421e-a8f2-4ad344b12386] Pending
functional_test.go:1467: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1467: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-581506 -n functional-581506
functional_test.go:1467: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-05-10 18:18:49.421204994 +0000 UTC m=+1603.748817636
functional_test.go:1467: (dbg) Run:  kubectl --context functional-581506 describe po hello-node-fcfd88b6f-gmdwq -n default
functional_test.go:1467: (dbg) kubectl --context functional-581506 describe po hello-node-fcfd88b6f-gmdwq -n default:
Name:             hello-node-fcfd88b6f-gmdwq
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           app=hello-node
                  pod-template-hash=fcfd88b6f
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    ReplicaSet/hello-node-fcfd88b6f
Containers:
  echoserver:
    Image:        registry.k8s.io/echoserver:1.8
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-56j2g (ro)
Volumes:
  kube-api-access-56j2g:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>
functional_test.go:1467: (dbg) Run:  kubectl --context functional-581506 logs hello-node-fcfd88b6f-gmdwq -n default
functional_test.go:1467: (dbg) kubectl --context functional-581506 logs hello-node-fcfd88b6f-gmdwq -n default:
functional_test.go:1468: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.65s)
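The failing step is a standard deploy-and-expose flow. A minimal sketch for reproducing it by hand with the same commands the test invokes (the kubectl wait call is an illustrative substitute for the test's label-based polling, not something the test itself runs):
	# same create/expose commands as functional_test.go:1456 and functional_test.go:1462
	kubectl --context functional-581506 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-581506 expose deployment hello-node --type=NodePort --port=8080
	# wait for the rollout; the test instead polls pods labeled app=hello-node for 10m
	kubectl --context functional-581506 wait --for=condition=available deployment/hello-node --timeout=600s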

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (242.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-581506 /tmp/TestFunctionalparallelMountCmdany-port178078978/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1746900719767655459" to /tmp/TestFunctionalparallelMountCmdany-port178078978/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1746900719767655459" to /tmp/TestFunctionalparallelMountCmdany-port178078978/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1746900719767655459" to /tmp/TestFunctionalparallelMountCmdany-port178078978/001/test-1746900719767655459
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-581506 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (203.987378ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0510 18:11:59.971949  395980 retry.go:31] will retry after 710.712752ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 May 10 18:11 created-by-test
-rw-r--r-- 1 docker docker 24 May 10 18:11 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 May 10 18:11 test-1746900719767655459
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 ssh cat /mount-9p/test-1746900719767655459
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-581506 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [7ec17281-4d7c-4497-babf-662e25805799] Pending
E0510 18:14:37.810831  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:16:00.886666  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:153: ***** TestFunctional/parallel/MountCmd/any-port: pod "integration-test=busybox-mount" failed to start within 4m0s: context deadline exceeded ****
functional_test_mount_test.go:153: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-581506 -n functional-581506
functional_test_mount_test.go:153: TestFunctional/parallel/MountCmd/any-port: showing logs for failed pods as of 2025-05-10 18:16:01.718553783 +0000 UTC m=+1436.046166431
functional_test_mount_test.go:153: (dbg) Run:  kubectl --context functional-581506 describe po busybox-mount -n default
functional_test_mount_test.go:153: (dbg) kubectl --context functional-581506 describe po busybox-mount -n default:
Name:             busybox-mount
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           integration-test=busybox-mount
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Containers:
  mount-munger:
    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Port:       <none>
    Host Port:  <none>
    Command:
      /bin/sh
      -c
      --
    Args:
      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test; date >> /mount-9p/pod-dates
    Environment:  <none>
    Mounts:
      /mount-9p from test-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t5rkt (ro)
Volumes:
  test-volume:
    Type:          HostPath (bare host directory volume)
    Path:          /mount-9p
    HostPathType:  
  kube-api-access-t5rkt:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>
functional_test_mount_test.go:153: (dbg) Run:  kubectl --context functional-581506 logs busybox-mount -n default
functional_test_mount_test.go:153: (dbg) kubectl --context functional-581506 logs busybox-mount -n default:
functional_test_mount_test.go:154: failed waiting for busybox-mount pod: integration-test=busybox-mount within 4m0s: context deadline exceeded
functional_test_mount_test.go:80: "TestFunctional/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-581506 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (215.296574ms)

                                                
                                                
-- stdout --
	192.168.39.1 on /mount-9p type 9p (rw,relatime,sync,dirsync,dfltuid=1000,dfltgid=1000,access=any,msize=65536,trans=tcp,noextend,port=35581)
	total 2
	-rw-r--r-- 1 docker docker 24 May 10 18:11 created-by-test
	-rw-r--r-- 1 docker docker 24 May 10 18:11 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 May 10 18:11 test-1746900719767655459
	cat: /mount-9p/pod-dates: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-amd64 -p functional-581506 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-581506 /tmp/TestFunctionalparallelMountCmdany-port178078978/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-amd64 mount -p functional-581506 /tmp/TestFunctionalparallelMountCmdany-port178078978/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalparallelMountCmdany-port178078978/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.39.1:35581
* Userspace file server: ufs starting
* Successfully mounted /tmp/TestFunctionalparallelMountCmdany-port178078978/001 to /mount-9p

                                                
                                                
* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

                                                
                                                

                                                
                                                
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-amd64 mount -p functional-581506 /tmp/TestFunctionalparallelMountCmdany-port178078978/001:/mount-9p --alsologtostderr -v=1] stderr:
I0510 18:11:59.812966  405741 out.go:345] Setting OutFile to fd 1 ...
I0510 18:11:59.813122  405741 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0510 18:11:59.813132  405741 out.go:358] Setting ErrFile to fd 2...
I0510 18:11:59.813137  405741 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0510 18:11:59.813307  405741 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-388787/.minikube/bin
I0510 18:11:59.813562  405741 mustload.go:65] Loading cluster: functional-581506
I0510 18:11:59.813956  405741 config.go:182] Loaded profile config "functional-581506": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
I0510 18:11:59.814303  405741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0510 18:11:59.814380  405741 main.go:141] libmachine: Launching plugin server for driver kvm2
I0510 18:11:59.830278  405741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35955
I0510 18:11:59.830925  405741 main.go:141] libmachine: () Calling .GetVersion
I0510 18:11:59.831531  405741 main.go:141] libmachine: Using API Version  1
I0510 18:11:59.831556  405741 main.go:141] libmachine: () Calling .SetConfigRaw
I0510 18:11:59.831974  405741 main.go:141] libmachine: () Calling .GetMachineName
I0510 18:11:59.832145  405741 main.go:141] libmachine: (functional-581506) Calling .GetState
I0510 18:11:59.834244  405741 host.go:66] Checking if "functional-581506" exists ...
I0510 18:11:59.834723  405741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0510 18:11:59.834782  405741 main.go:141] libmachine: Launching plugin server for driver kvm2
I0510 18:11:59.849983  405741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40979
I0510 18:11:59.850599  405741 main.go:141] libmachine: () Calling .GetVersion
I0510 18:11:59.851174  405741 main.go:141] libmachine: Using API Version  1
I0510 18:11:59.851200  405741 main.go:141] libmachine: () Calling .SetConfigRaw
I0510 18:11:59.851601  405741 main.go:141] libmachine: () Calling .GetMachineName
I0510 18:11:59.851778  405741 main.go:141] libmachine: (functional-581506) Calling .DriverName
I0510 18:11:59.851919  405741 main.go:141] libmachine: (functional-581506) Calling .DriverName
I0510 18:11:59.852037  405741 main.go:141] libmachine: (functional-581506) Calling .GetIP
I0510 18:11:59.855185  405741 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined MAC address 52:54:00:34:2c:dc in network mk-functional-581506
I0510 18:11:59.855574  405741 main.go:141] libmachine: (functional-581506) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:2c:dc", ip: ""} in network mk-functional-581506: {Iface:virbr1 ExpiryTime:2025-05-10 19:00:46 +0000 UTC Type:0 Mac:52:54:00:34:2c:dc Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-581506 Clientid:01:52:54:00:34:2c:dc}
I0510 18:11:59.855600  405741 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined IP address 192.168.39.52 and MAC address 52:54:00:34:2c:dc in network mk-functional-581506
I0510 18:11:59.856676  405741 main.go:141] libmachine: (functional-581506) Calling .DriverName
I0510 18:11:59.860119  405741 out.go:177] * Mounting host path /tmp/TestFunctionalparallelMountCmdany-port178078978/001 into VM as /mount-9p ...
I0510 18:11:59.861639  405741 out.go:177]   - Mount type:   9p
I0510 18:11:59.862977  405741 out.go:177]   - User ID:      docker
I0510 18:11:59.864315  405741 out.go:177]   - Group ID:     docker
I0510 18:11:59.866089  405741 out.go:177]   - Version:      9p2000.L
I0510 18:11:59.867751  405741 out.go:177]   - Message Size: 262144
I0510 18:11:59.869082  405741 out.go:177]   - Options:      map[]
I0510 18:11:59.870444  405741 out.go:177]   - Bind Address: 192.168.39.1:35581
I0510 18:11:59.872338  405741 out.go:177] * Userspace file server: 
I0510 18:11:59.872457  405741 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f /mount-9p || echo "
I0510 18:11:59.872496  405741 main.go:141] libmachine: (functional-581506) Calling .GetSSHHostname
I0510 18:11:59.875958  405741 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined MAC address 52:54:00:34:2c:dc in network mk-functional-581506
I0510 18:11:59.876410  405741 main.go:141] libmachine: (functional-581506) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:2c:dc", ip: ""} in network mk-functional-581506: {Iface:virbr1 ExpiryTime:2025-05-10 19:00:46 +0000 UTC Type:0 Mac:52:54:00:34:2c:dc Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-581506 Clientid:01:52:54:00:34:2c:dc}
I0510 18:11:59.876442  405741 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined IP address 192.168.39.52 and MAC address 52:54:00:34:2c:dc in network mk-functional-581506
I0510 18:11:59.876636  405741 main.go:141] libmachine: (functional-581506) Calling .GetSSHPort
I0510 18:11:59.876861  405741 main.go:141] libmachine: (functional-581506) Calling .GetSSHKeyPath
I0510 18:11:59.877047  405741 main.go:141] libmachine: (functional-581506) Calling .GetSSHUsername
I0510 18:11:59.877206  405741 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/functional-581506/id_rsa Username:docker}
I0510 18:11:59.965731  405741 mount.go:180] unmount for /mount-9p ran successfully
I0510 18:11:59.965777  405741 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I0510 18:11:59.981680  405741 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=35581,trans=tcp,version=9p2000.L 192.168.39.1 /mount-9p"
I0510 18:12:00.007750  405741 main.go:125] stdlog: ufs.go:141 connected
I0510 18:12:00.009317  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tversion tag 65535 msize 65536 version '9P2000.L'
I0510 18:12:00.009392  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rversion tag 65535 msize 65536 version '9P2000'
I0510 18:12:00.009623  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I0510 18:12:00.009695  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rattach tag 0 aqid (20fa5c9 bb659895 'd')
I0510 18:12:00.010016  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tstat tag 0 fid 0
I0510 18:12:00.010134  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa5c9 bb659895 'd') m d775 at 0 mt 1746900719 l 4096 t 0 d 0 ext )
I0510 18:12:00.011036  405741 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/.mount-process: {Name:mke45a478bd26373d4d98b26d8ed4da6144bb8ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0510 18:12:00.011218  405741 mount.go:105] mount successful: ""
I0510 18:12:00.013157  405741 out.go:177] * Successfully mounted /tmp/TestFunctionalparallelMountCmdany-port178078978/001 to /mount-9p
I0510 18:12:00.014543  405741 out.go:201] 
I0510 18:12:00.015629  405741 out.go:177] * NOTE: This process must stay alive for the mount to be accessible ...
I0510 18:12:01.074636  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tstat tag 0 fid 0
I0510 18:12:01.074831  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa5c9 bb659895 'd') m d775 at 0 mt 1746900719 l 4096 t 0 d 0 ext )
I0510 18:12:01.076659  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Twalk tag 0 fid 0 newfid 1 
I0510 18:12:01.076728  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rwalk tag 0 
I0510 18:12:01.077069  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Topen tag 0 fid 1 mode 0
I0510 18:12:01.077141  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Ropen tag 0 qid (20fa5c9 bb659895 'd') iounit 0
I0510 18:12:01.077382  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tstat tag 0 fid 0
I0510 18:12:01.077480  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa5c9 bb659895 'd') m d775 at 0 mt 1746900719 l 4096 t 0 d 0 ext )
I0510 18:12:01.077801  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tread tag 0 fid 1 offset 0 count 65512
I0510 18:12:01.077976  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rread tag 0 count 258
I0510 18:12:01.078331  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tread tag 0 fid 1 offset 258 count 65254
I0510 18:12:01.078364  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rread tag 0 count 0
I0510 18:12:01.078655  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tread tag 0 fid 1 offset 258 count 65512
I0510 18:12:01.078700  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rread tag 0 count 0
I0510 18:12:01.078944  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I0510 18:12:01.078988  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rwalk tag 0 (20fa5cb bb659895 '') 
I0510 18:12:01.079217  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tstat tag 0 fid 2
I0510 18:12:01.079320  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa5cb bb659895 '') m 644 at 0 mt 1746900719 l 24 t 0 d 0 ext )
I0510 18:12:01.079529  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tstat tag 0 fid 2
I0510 18:12:01.079606  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa5cb bb659895 '') m 644 at 0 mt 1746900719 l 24 t 0 d 0 ext )
I0510 18:12:01.079803  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tclunk tag 0 fid 2
I0510 18:12:01.079856  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rclunk tag 0
I0510 18:12:01.080065  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I0510 18:12:01.080106  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rwalk tag 0 (20fa5cb bb659895 '') 
I0510 18:12:01.080305  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tstat tag 0 fid 2
I0510 18:12:01.080392  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa5cb bb659895 '') m 644 at 0 mt 1746900719 l 24 t 0 d 0 ext )
I0510 18:12:01.080587  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tclunk tag 0 fid 2
I0510 18:12:01.080612  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rclunk tag 0
I0510 18:12:01.080922  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Twalk tag 0 fid 0 newfid 2 0:'test-1746900719767655459' 
I0510 18:12:01.080970  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rwalk tag 0 (20fa5cc bb659895 '') 
I0510 18:12:01.081245  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tstat tag 0 fid 2
I0510 18:12:01.081326  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rstat tag 0 st ('test-1746900719767655459' 'jenkins' 'balintp' '' q (20fa5cc bb659895 '') m 644 at 0 mt 1746900719 l 24 t 0 d 0 ext )
I0510 18:12:01.081569  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tstat tag 0 fid 2
I0510 18:12:01.081637  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rstat tag 0 st ('test-1746900719767655459' 'jenkins' 'balintp' '' q (20fa5cc bb659895 '') m 644 at 0 mt 1746900719 l 24 t 0 d 0 ext )
I0510 18:12:01.081889  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tclunk tag 0 fid 2
I0510 18:12:01.081915  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rclunk tag 0
I0510 18:12:01.082120  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Twalk tag 0 fid 0 newfid 2 0:'test-1746900719767655459' 
I0510 18:12:01.082168  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rwalk tag 0 (20fa5cc bb659895 '') 
I0510 18:12:01.082350  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tstat tag 0 fid 2
I0510 18:12:01.082425  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rstat tag 0 st ('test-1746900719767655459' 'jenkins' 'balintp' '' q (20fa5cc bb659895 '') m 644 at 0 mt 1746900719 l 24 t 0 d 0 ext )
I0510 18:12:01.082680  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tclunk tag 0 fid 2
I0510 18:12:01.082728  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rclunk tag 0
I0510 18:12:01.083177  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I0510 18:12:01.083226  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rwalk tag 0 (20fa5ca bb659895 '') 
I0510 18:12:01.083485  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tstat tag 0 fid 2
I0510 18:12:01.083560  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa5ca bb659895 '') m 644 at 0 mt 1746900719 l 24 t 0 d 0 ext )
I0510 18:12:01.083922  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tstat tag 0 fid 2
I0510 18:12:01.083999  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa5ca bb659895 '') m 644 at 0 mt 1746900719 l 24 t 0 d 0 ext )
I0510 18:12:01.084264  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tclunk tag 0 fid 2
I0510 18:12:01.084288  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rclunk tag 0
I0510 18:12:01.084474  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I0510 18:12:01.084515  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rwalk tag 0 (20fa5ca bb659895 '') 
I0510 18:12:01.084691  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tstat tag 0 fid 2
I0510 18:12:01.084775  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa5ca bb659895 '') m 644 at 0 mt 1746900719 l 24 t 0 d 0 ext )
I0510 18:12:01.084966  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tclunk tag 0 fid 2
I0510 18:12:01.084993  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rclunk tag 0
I0510 18:12:01.085235  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tread tag 0 fid 1 offset 258 count 65512
I0510 18:12:01.085266  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rread tag 0 count 0
I0510 18:12:01.085557  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tclunk tag 0 fid 1
I0510 18:12:01.085591  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rclunk tag 0
I0510 18:12:01.294864  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Twalk tag 0 fid 0 newfid 1 0:'test-1746900719767655459' 
I0510 18:12:01.294961  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rwalk tag 0 (20fa5cc bb659895 '') 
I0510 18:12:01.295431  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tstat tag 0 fid 1
I0510 18:12:01.295620  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rstat tag 0 st ('test-1746900719767655459' 'jenkins' 'balintp' '' q (20fa5cc bb659895 '') m 644 at 0 mt 1746900719 l 24 t 0 d 0 ext )
I0510 18:12:01.295909  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Twalk tag 0 fid 1 newfid 2 
I0510 18:12:01.295961  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rwalk tag 0 
I0510 18:12:01.296217  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Topen tag 0 fid 2 mode 0
I0510 18:12:01.296304  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Ropen tag 0 qid (20fa5cc bb659895 '') iounit 0
I0510 18:12:01.296476  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tstat tag 0 fid 1
I0510 18:12:01.296596  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rstat tag 0 st ('test-1746900719767655459' 'jenkins' 'balintp' '' q (20fa5cc bb659895 '') m 644 at 0 mt 1746900719 l 24 t 0 d 0 ext )
I0510 18:12:01.296903  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tread tag 0 fid 2 offset 0 count 65512
I0510 18:12:01.296968  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rread tag 0 count 24
I0510 18:12:01.297130  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tread tag 0 fid 2 offset 24 count 65512
I0510 18:12:01.297172  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rread tag 0 count 0
I0510 18:12:01.297360  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tread tag 0 fid 2 offset 24 count 65512
I0510 18:12:01.297395  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rread tag 0 count 0
I0510 18:12:01.297604  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tclunk tag 0 fid 2
I0510 18:12:01.297664  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rclunk tag 0
I0510 18:12:01.297878  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tclunk tag 0 fid 1
I0510 18:12:01.297913  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rclunk tag 0
I0510 18:16:02.036835  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tstat tag 0 fid 0
I0510 18:16:02.037089  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa5c9 bb659895 'd') m d775 at 0 mt 1746900719 l 4096 t 0 d 0 ext )
I0510 18:16:02.038840  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Twalk tag 0 fid 0 newfid 1 
I0510 18:16:02.038916  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rwalk tag 0 
I0510 18:16:02.039296  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Topen tag 0 fid 1 mode 0
I0510 18:16:02.039410  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Ropen tag 0 qid (20fa5c9 bb659895 'd') iounit 0
I0510 18:16:02.039733  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tstat tag 0 fid 0
I0510 18:16:02.039899  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa5c9 bb659895 'd') m d775 at 0 mt 1746900719 l 4096 t 0 d 0 ext )
I0510 18:16:02.040457  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tread tag 0 fid 1 offset 0 count 65512
I0510 18:16:02.040639  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rread tag 0 count 258
I0510 18:16:02.041113  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tread tag 0 fid 1 offset 258 count 65254
I0510 18:16:02.041157  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rread tag 0 count 0
I0510 18:16:02.041516  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tread tag 0 fid 1 offset 258 count 65512
I0510 18:16:02.041565  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rread tag 0 count 0
I0510 18:16:02.041842  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I0510 18:16:02.041905  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rwalk tag 0 (20fa5cb bb659895 '') 
I0510 18:16:02.042238  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tstat tag 0 fid 2
I0510 18:16:02.042368  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa5cb bb659895 '') m 644 at 0 mt 1746900719 l 24 t 0 d 0 ext )
I0510 18:16:02.042766  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tstat tag 0 fid 2
I0510 18:16:02.042900  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa5cb bb659895 '') m 644 at 0 mt 1746900719 l 24 t 0 d 0 ext )
I0510 18:16:02.043263  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tclunk tag 0 fid 2
I0510 18:16:02.043302  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rclunk tag 0
I0510 18:16:02.043746  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I0510 18:16:02.043794  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rwalk tag 0 (20fa5cb bb659895 '') 
I0510 18:16:02.044228  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tstat tag 0 fid 2
I0510 18:16:02.044351  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa5cb bb659895 '') m 644 at 0 mt 1746900719 l 24 t 0 d 0 ext )
I0510 18:16:02.044670  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tclunk tag 0 fid 2
I0510 18:16:02.044702  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rclunk tag 0
I0510 18:16:02.045060  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Twalk tag 0 fid 0 newfid 2 0:'test-1746900719767655459' 
I0510 18:16:02.045114  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rwalk tag 0 (20fa5cc bb659895 '') 
I0510 18:16:02.045410  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tstat tag 0 fid 2
I0510 18:16:02.045508  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rstat tag 0 st ('test-1746900719767655459' 'jenkins' 'balintp' '' q (20fa5cc bb659895 '') m 644 at 0 mt 1746900719 l 24 t 0 d 0 ext )
I0510 18:16:02.045870  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tstat tag 0 fid 2
I0510 18:16:02.045979  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rstat tag 0 st ('test-1746900719767655459' 'jenkins' 'balintp' '' q (20fa5cc bb659895 '') m 644 at 0 mt 1746900719 l 24 t 0 d 0 ext )
I0510 18:16:02.046371  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tclunk tag 0 fid 2
I0510 18:16:02.046401  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rclunk tag 0
I0510 18:16:02.046717  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Twalk tag 0 fid 0 newfid 2 0:'test-1746900719767655459' 
I0510 18:16:02.046760  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rwalk tag 0 (20fa5cc bb659895 '') 
I0510 18:16:02.047102  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tstat tag 0 fid 2
I0510 18:16:02.047212  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rstat tag 0 st ('test-1746900719767655459' 'jenkins' 'balintp' '' q (20fa5cc bb659895 '') m 644 at 0 mt 1746900719 l 24 t 0 d 0 ext )
I0510 18:16:02.047539  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tclunk tag 0 fid 2
I0510 18:16:02.047576  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rclunk tag 0
I0510 18:16:02.047920  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I0510 18:16:02.047960  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rwalk tag 0 (20fa5ca bb659895 '') 
I0510 18:16:02.048189  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tstat tag 0 fid 2
I0510 18:16:02.048284  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa5ca bb659895 '') m 644 at 0 mt 1746900719 l 24 t 0 d 0 ext )
I0510 18:16:02.048527  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tstat tag 0 fid 2
I0510 18:16:02.048614  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa5ca bb659895 '') m 644 at 0 mt 1746900719 l 24 t 0 d 0 ext )
I0510 18:16:02.048883  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tclunk tag 0 fid 2
I0510 18:16:02.048922  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rclunk tag 0
I0510 18:16:02.049116  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I0510 18:16:02.049156  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rwalk tag 0 (20fa5ca bb659895 '') 
I0510 18:16:02.049330  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tstat tag 0 fid 2
I0510 18:16:02.049424  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa5ca bb659895 '') m 644 at 0 mt 1746900719 l 24 t 0 d 0 ext )
I0510 18:16:02.049615  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tclunk tag 0 fid 2
I0510 18:16:02.049641  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rclunk tag 0
I0510 18:16:02.049826  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tread tag 0 fid 1 offset 258 count 65512
I0510 18:16:02.049873  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rread tag 0 count 0
I0510 18:16:02.050073  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tclunk tag 0 fid 1
I0510 18:16:02.050121  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rclunk tag 0
I0510 18:16:02.053100  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I0510 18:16:02.053157  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rerror tag 0 ename 'file not found' ecode 0
I0510 18:16:02.267287  405741 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.52:53434 Tclunk tag 0 fid 0
I0510 18:16:02.267343  405741 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.52:53434 Rclunk tag 0
I0510 18:16:02.271789  405741 main.go:125] stdlog: ufs.go:147 disconnected
I0510 18:16:02.492148  405741 out.go:177] * Unmounting /mount-9p ...
I0510 18:16:02.493513  405741 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f /mount-9p || echo "
I0510 18:16:02.502370  405741 mount.go:180] unmount for /mount-9p ran successfully
I0510 18:16:02.502504  405741 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/.mount-process: {Name:mke45a478bd26373d4d98b26d8ed4da6144bb8ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0510 18:16:02.504431  405741 out.go:201] 
W0510 18:16:02.506029  405741 out.go:270] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I0510 18:16:02.507254  405741 out.go:201] 
--- FAIL: TestFunctional/parallel/MountCmd/any-port (242.82s)
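The stderr above records the exact guest-side 9p mount the helper runs (ssh_runner at 18:11:59.981680); the bind address and port (192.168.39.1:35581) are specific to this run. A minimal sketch for checking or re-issuing the mount by hand, reusing only commands that already appear in this log:
	# verify the 9p mount is visible inside the VM
	out/minikube-linux-amd64 -p functional-581506 ssh "findmnt -T /mount-9p"
	# guest-side mount command the helper runs (address/port vary per run)
	out/minikube-linux-amd64 -p functional-581506 ssh "sudo mount -t 9p -o dfltgid=\$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=\$(id -u docker),msize=262144,port=35581,trans=tcp,version=9p2000.L 192.168.39.1 /mount-9p"
	# clean up, as the test does on failure
	out/minikube-linux-amd64 -p functional-581506 ssh "sudo umount -f /mount-9p"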

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 service --namespace=default --https --url hello-node
functional_test.go:1526: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-581506 service --namespace=default --https --url hello-node: exit status 115 (332.085555ms)

                                                
                                                
-- stdout --
	https://192.168.39.52:32417
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1528: failed to get service url. args "out/minikube-linux-amd64 -p functional-581506 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 service hello-node --url --format={{.IP}}
functional_test.go:1557: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-581506 service hello-node --url --format={{.IP}}: exit status 115 (331.904468ms)

                                                
                                                
-- stdout --
	192.168.39.52
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1559: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-581506 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 service hello-node --url
functional_test.go:1576: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-581506 service hello-node --url: exit status 115 (322.642688ms)

                                                
                                                
-- stdout --
	http://192.168.39.52:32417
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1578: failed to get service url. args: "out/minikube-linux-amd64 -p functional-581506 service hello-node --url": exit status 115
functional_test.go:1582: found endpoint for hello-node: http://192.168.39.52:32417
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.32s)
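All three service subtests fail for the same reason: the NodePort is allocated and printed (http://192.168.39.52:32417), but no hello-node pod is running behind it, so minikube exits with SVC_UNREACHABLE. A minimal sketch for checking the endpoint once a pod is Ready; the IP and port are taken from this run and will differ elsewhere:
	# returns the HTTP status code once the echoserver pod is serving
	curl -s -o /dev/null -w "%{http_code}\n" http://192.168.39.52:32417
	# or re-query the URL through minikube, as the test does
	out/minikube-linux-amd64 -p functional-581506 service hello-node --url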

                                                
                                    
x
+
TestPreload (289.32s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-513090 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0510 19:03:48.489912  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:04:37.810930  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-513090 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m8.405507947s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-513090 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-513090 image pull gcr.io/k8s-minikube/busybox: (2.468895968s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-513090
E0510 19:06:00.894563  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-513090: (1m31.019995247s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-513090 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-513090 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m4.021445427s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-513090 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:631: *** TestPreload FAILED at 2025-05-10 19:08:20.092734622 +0000 UTC m=+4574.420347257
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-513090 -n test-preload-513090
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-513090 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-513090 logs -n 25: (1.228424312s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-247612 ssh -n                                                                 | multinode-247612     | jenkins | v1.35.0 | 10 May 25 18:51 UTC | 10 May 25 18:51 UTC |
	|         | multinode-247612-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-247612 ssh -n multinode-247612 sudo cat                                       | multinode-247612     | jenkins | v1.35.0 | 10 May 25 18:51 UTC | 10 May 25 18:51 UTC |
	|         | /home/docker/cp-test_multinode-247612-m03_multinode-247612.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-247612 cp multinode-247612-m03:/home/docker/cp-test.txt                       | multinode-247612     | jenkins | v1.35.0 | 10 May 25 18:51 UTC | 10 May 25 18:51 UTC |
	|         | multinode-247612-m02:/home/docker/cp-test_multinode-247612-m03_multinode-247612-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-247612 ssh -n                                                                 | multinode-247612     | jenkins | v1.35.0 | 10 May 25 18:51 UTC | 10 May 25 18:51 UTC |
	|         | multinode-247612-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-247612 ssh -n multinode-247612-m02 sudo cat                                   | multinode-247612     | jenkins | v1.35.0 | 10 May 25 18:51 UTC | 10 May 25 18:51 UTC |
	|         | /home/docker/cp-test_multinode-247612-m03_multinode-247612-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-247612 node stop m03                                                          | multinode-247612     | jenkins | v1.35.0 | 10 May 25 18:51 UTC | 10 May 25 18:52 UTC |
	| node    | multinode-247612 node start                                                             | multinode-247612     | jenkins | v1.35.0 | 10 May 25 18:52 UTC | 10 May 25 18:52 UTC |
	|         | m03 -v=5 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-247612                                                                | multinode-247612     | jenkins | v1.35.0 | 10 May 25 18:52 UTC |                     |
	| stop    | -p multinode-247612                                                                     | multinode-247612     | jenkins | v1.35.0 | 10 May 25 18:52 UTC | 10 May 25 18:55 UTC |
	| start   | -p multinode-247612                                                                     | multinode-247612     | jenkins | v1.35.0 | 10 May 25 18:55 UTC | 10 May 25 18:58 UTC |
	|         | --wait=true -v=5                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-247612                                                                | multinode-247612     | jenkins | v1.35.0 | 10 May 25 18:58 UTC |                     |
	| node    | multinode-247612 node delete                                                            | multinode-247612     | jenkins | v1.35.0 | 10 May 25 18:58 UTC | 10 May 25 18:58 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-247612 stop                                                                   | multinode-247612     | jenkins | v1.35.0 | 10 May 25 18:58 UTC | 10 May 25 19:01 UTC |
	| start   | -p multinode-247612                                                                     | multinode-247612     | jenkins | v1.35.0 | 10 May 25 19:01 UTC | 10 May 25 19:02 UTC |
	|         | --wait=true -v=5                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-247612                                                                | multinode-247612     | jenkins | v1.35.0 | 10 May 25 19:02 UTC |                     |
	| start   | -p multinode-247612-m02                                                                 | multinode-247612-m02 | jenkins | v1.35.0 | 10 May 25 19:02 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-247612-m03                                                                 | multinode-247612-m03 | jenkins | v1.35.0 | 10 May 25 19:02 UTC | 10 May 25 19:03 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-247612                                                                 | multinode-247612     | jenkins | v1.35.0 | 10 May 25 19:03 UTC |                     |
	| delete  | -p multinode-247612-m03                                                                 | multinode-247612-m03 | jenkins | v1.35.0 | 10 May 25 19:03 UTC | 10 May 25 19:03 UTC |
	| delete  | -p multinode-247612                                                                     | multinode-247612     | jenkins | v1.35.0 | 10 May 25 19:03 UTC | 10 May 25 19:03 UTC |
	| start   | -p test-preload-513090                                                                  | test-preload-513090  | jenkins | v1.35.0 | 10 May 25 19:03 UTC | 10 May 25 19:05 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-513090 image pull                                                          | test-preload-513090  | jenkins | v1.35.0 | 10 May 25 19:05 UTC | 10 May 25 19:05 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-513090                                                                  | test-preload-513090  | jenkins | v1.35.0 | 10 May 25 19:05 UTC | 10 May 25 19:07 UTC |
	| start   | -p test-preload-513090                                                                  | test-preload-513090  | jenkins | v1.35.0 | 10 May 25 19:07 UTC | 10 May 25 19:08 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-513090 image list                                                          | test-preload-513090  | jenkins | v1.35.0 | 10 May 25 19:08 UTC | 10 May 25 19:08 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
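	
	(For reference, the final "start" row in the table above, whose run is the one covered by the "Last Start" log below, reconstructs from the wrapped Args column to roughly this single command; the binary path is taken from the MINIKUBE_BIN value printed later in the log, so treat the exact invocation as inferred rather than recorded:)
	
	  out/minikube-linux-amd64 start -p test-preload-513090 --memory=2200 \
	    --alsologtostderr -v=1 --wait=true --driver=kvm2 --container-runtime=crio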
	
	
	==> Last Start <==
	Log file created at: 2025/05/10 19:07:15
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0510 19:07:15.869634  431837 out.go:345] Setting OutFile to fd 1 ...
	I0510 19:07:15.869916  431837 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 19:07:15.869925  431837 out.go:358] Setting ErrFile to fd 2...
	I0510 19:07:15.869930  431837 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 19:07:15.870134  431837 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-388787/.minikube/bin
	I0510 19:07:15.870679  431837 out.go:352] Setting JSON to false
	I0510 19:07:15.871733  431837 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":31784,"bootTime":1746872252,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0510 19:07:15.871799  431837 start.go:140] virtualization: kvm guest
	I0510 19:07:15.874033  431837 out.go:177] * [test-preload-513090] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0510 19:07:15.875511  431837 notify.go:220] Checking for updates...
	I0510 19:07:15.875545  431837 out.go:177]   - MINIKUBE_LOCATION=20720
	I0510 19:07:15.876935  431837 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0510 19:07:15.878494  431837 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20720-388787/kubeconfig
	I0510 19:07:15.879997  431837 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-388787/.minikube
	I0510 19:07:15.881331  431837 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0510 19:07:15.882812  431837 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0510 19:07:15.884697  431837 config.go:182] Loaded profile config "test-preload-513090": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0510 19:07:15.885454  431837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:07:15.885546  431837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:07:15.901512  431837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41193
	I0510 19:07:15.902075  431837 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:07:15.902643  431837 main.go:141] libmachine: Using API Version  1
	I0510 19:07:15.902674  431837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:07:15.903127  431837 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:07:15.903356  431837 main.go:141] libmachine: (test-preload-513090) Calling .DriverName
	I0510 19:07:15.905267  431837 out.go:177] * Kubernetes 1.33.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.33.0
	I0510 19:07:15.906613  431837 driver.go:404] Setting default libvirt URI to qemu:///system
	I0510 19:07:15.906951  431837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:07:15.907000  431837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:07:15.922239  431837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34165
	I0510 19:07:15.922700  431837 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:07:15.923177  431837 main.go:141] libmachine: Using API Version  1
	I0510 19:07:15.923207  431837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:07:15.923653  431837 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:07:15.923926  431837 main.go:141] libmachine: (test-preload-513090) Calling .DriverName
	I0510 19:07:15.962562  431837 out.go:177] * Using the kvm2 driver based on existing profile
	I0510 19:07:15.964005  431837 start.go:304] selected driver: kvm2
	I0510 19:07:15.964031  431837 start.go:908] validating driver "kvm2" against &{Name:test-preload-513090 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-513090 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.59 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 19:07:15.964163  431837 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0510 19:07:15.965197  431837 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0510 19:07:15.965276  431837 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20720-388787/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0510 19:07:15.981143  431837 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0510 19:07:15.981516  431837 start_flags.go:975] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0510 19:07:15.981547  431837 cni.go:84] Creating CNI manager for ""
	I0510 19:07:15.981594  431837 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0510 19:07:15.981646  431837 start.go:347] cluster config:
	{Name:test-preload-513090 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-513090 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.59 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 19:07:15.981746  431837 iso.go:125] acquiring lock: {Name:mk19640015999219180c6685480547adf0c02201 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0510 19:07:15.983876  431837 out.go:177] * Starting "test-preload-513090" primary control-plane node in "test-preload-513090" cluster
	I0510 19:07:15.985102  431837 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0510 19:07:16.014048  431837 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0510 19:07:16.014077  431837 cache.go:56] Caching tarball of preloaded images
	I0510 19:07:16.014263  431837 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0510 19:07:16.016073  431837 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0510 19:07:16.017393  431837 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0510 19:07:16.039782  431837 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/20720-388787/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0510 19:07:19.347926  431837 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0510 19:07:19.348025  431837 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20720-388787/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0510 19:07:20.208407  431837 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
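	(The preload tarball URL and its expected md5 both appear in the checksum-tagged download line above; a manual spot-check of the same artifact, using a hypothetical local filename, would look roughly like this:)
	
	  curl -fsSL -o preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 \
	    "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4"
	  echo "b2ee0ab83ed99f9e7ff71cb0cf27e8f9  preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4" | md5sum -c -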
	I0510 19:07:20.208554  431837 profile.go:143] Saving config to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/test-preload-513090/config.json ...
	I0510 19:07:20.208823  431837 start.go:360] acquireMachinesLock for test-preload-513090: {Name:mk11499d7756d503a7a24339ad1a7f9ab9dc0fab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0510 19:07:20.208892  431837 start.go:364] duration metric: took 44.2µs to acquireMachinesLock for "test-preload-513090"
	I0510 19:07:20.208908  431837 start.go:96] Skipping create...Using existing machine configuration
	I0510 19:07:20.208914  431837 fix.go:54] fixHost starting: 
	I0510 19:07:20.209166  431837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:07:20.209203  431837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:07:20.224488  431837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37489
	I0510 19:07:20.225022  431837 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:07:20.225508  431837 main.go:141] libmachine: Using API Version  1
	I0510 19:07:20.225537  431837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:07:20.225986  431837 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:07:20.226283  431837 main.go:141] libmachine: (test-preload-513090) Calling .DriverName
	I0510 19:07:20.226443  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetState
	I0510 19:07:20.228506  431837 fix.go:112] recreateIfNeeded on test-preload-513090: state=Stopped err=<nil>
	I0510 19:07:20.228549  431837 main.go:141] libmachine: (test-preload-513090) Calling .DriverName
	W0510 19:07:20.228755  431837 fix.go:138] unexpected machine state, will restart: <nil>
	I0510 19:07:20.231298  431837 out.go:177] * Restarting existing kvm2 VM for "test-preload-513090" ...
	I0510 19:07:20.232840  431837 main.go:141] libmachine: (test-preload-513090) Calling .Start
	I0510 19:07:20.233073  431837 main.go:141] libmachine: (test-preload-513090) starting domain...
	I0510 19:07:20.233097  431837 main.go:141] libmachine: (test-preload-513090) ensuring networks are active...
	I0510 19:07:20.233854  431837 main.go:141] libmachine: (test-preload-513090) Ensuring network default is active
	I0510 19:07:20.234133  431837 main.go:141] libmachine: (test-preload-513090) Ensuring network mk-test-preload-513090 is active
	I0510 19:07:20.234482  431837 main.go:141] libmachine: (test-preload-513090) getting domain XML...
	I0510 19:07:20.235399  431837 main.go:141] libmachine: (test-preload-513090) creating domain...
	I0510 19:07:21.477464  431837 main.go:141] libmachine: (test-preload-513090) waiting for IP...
	I0510 19:07:21.478637  431837 main.go:141] libmachine: (test-preload-513090) DBG | domain test-preload-513090 has defined MAC address 52:54:00:3c:07:7e in network mk-test-preload-513090
	I0510 19:07:21.479228  431837 main.go:141] libmachine: (test-preload-513090) DBG | unable to find current IP address of domain test-preload-513090 in network mk-test-preload-513090
	I0510 19:07:21.479324  431837 main.go:141] libmachine: (test-preload-513090) DBG | I0510 19:07:21.479200  431873 retry.go:31] will retry after 279.920039ms: waiting for domain to come up
	I0510 19:07:21.760996  431837 main.go:141] libmachine: (test-preload-513090) DBG | domain test-preload-513090 has defined MAC address 52:54:00:3c:07:7e in network mk-test-preload-513090
	I0510 19:07:21.761445  431837 main.go:141] libmachine: (test-preload-513090) DBG | unable to find current IP address of domain test-preload-513090 in network mk-test-preload-513090
	I0510 19:07:21.761478  431837 main.go:141] libmachine: (test-preload-513090) DBG | I0510 19:07:21.761409  431873 retry.go:31] will retry after 313.450585ms: waiting for domain to come up
	I0510 19:07:22.077152  431837 main.go:141] libmachine: (test-preload-513090) DBG | domain test-preload-513090 has defined MAC address 52:54:00:3c:07:7e in network mk-test-preload-513090
	I0510 19:07:22.077583  431837 main.go:141] libmachine: (test-preload-513090) DBG | unable to find current IP address of domain test-preload-513090 in network mk-test-preload-513090
	I0510 19:07:22.077611  431837 main.go:141] libmachine: (test-preload-513090) DBG | I0510 19:07:22.077560  431873 retry.go:31] will retry after 398.353964ms: waiting for domain to come up
	I0510 19:07:22.477441  431837 main.go:141] libmachine: (test-preload-513090) DBG | domain test-preload-513090 has defined MAC address 52:54:00:3c:07:7e in network mk-test-preload-513090
	I0510 19:07:22.477921  431837 main.go:141] libmachine: (test-preload-513090) DBG | unable to find current IP address of domain test-preload-513090 in network mk-test-preload-513090
	I0510 19:07:22.477994  431837 main.go:141] libmachine: (test-preload-513090) DBG | I0510 19:07:22.477903  431873 retry.go:31] will retry after 561.829368ms: waiting for domain to come up
	I0510 19:07:23.041871  431837 main.go:141] libmachine: (test-preload-513090) DBG | domain test-preload-513090 has defined MAC address 52:54:00:3c:07:7e in network mk-test-preload-513090
	I0510 19:07:23.042321  431837 main.go:141] libmachine: (test-preload-513090) DBG | unable to find current IP address of domain test-preload-513090 in network mk-test-preload-513090
	I0510 19:07:23.042349  431837 main.go:141] libmachine: (test-preload-513090) DBG | I0510 19:07:23.042290  431873 retry.go:31] will retry after 597.838476ms: waiting for domain to come up
	I0510 19:07:23.642477  431837 main.go:141] libmachine: (test-preload-513090) DBG | domain test-preload-513090 has defined MAC address 52:54:00:3c:07:7e in network mk-test-preload-513090
	I0510 19:07:23.642993  431837 main.go:141] libmachine: (test-preload-513090) DBG | unable to find current IP address of domain test-preload-513090 in network mk-test-preload-513090
	I0510 19:07:23.643024  431837 main.go:141] libmachine: (test-preload-513090) DBG | I0510 19:07:23.642947  431873 retry.go:31] will retry after 908.746083ms: waiting for domain to come up
	I0510 19:07:24.553069  431837 main.go:141] libmachine: (test-preload-513090) DBG | domain test-preload-513090 has defined MAC address 52:54:00:3c:07:7e in network mk-test-preload-513090
	I0510 19:07:24.553566  431837 main.go:141] libmachine: (test-preload-513090) DBG | unable to find current IP address of domain test-preload-513090 in network mk-test-preload-513090
	I0510 19:07:24.553606  431837 main.go:141] libmachine: (test-preload-513090) DBG | I0510 19:07:24.553552  431873 retry.go:31] will retry after 1.02248999s: waiting for domain to come up
	I0510 19:07:25.577402  431837 main.go:141] libmachine: (test-preload-513090) DBG | domain test-preload-513090 has defined MAC address 52:54:00:3c:07:7e in network mk-test-preload-513090
	I0510 19:07:25.577880  431837 main.go:141] libmachine: (test-preload-513090) DBG | unable to find current IP address of domain test-preload-513090 in network mk-test-preload-513090
	I0510 19:07:25.577905  431837 main.go:141] libmachine: (test-preload-513090) DBG | I0510 19:07:25.577843  431873 retry.go:31] will retry after 1.015231451s: waiting for domain to come up
	I0510 19:07:26.595328  431837 main.go:141] libmachine: (test-preload-513090) DBG | domain test-preload-513090 has defined MAC address 52:54:00:3c:07:7e in network mk-test-preload-513090
	I0510 19:07:26.595771  431837 main.go:141] libmachine: (test-preload-513090) DBG | unable to find current IP address of domain test-preload-513090 in network mk-test-preload-513090
	I0510 19:07:26.595804  431837 main.go:141] libmachine: (test-preload-513090) DBG | I0510 19:07:26.595732  431873 retry.go:31] will retry after 1.139069093s: waiting for domain to come up
	I0510 19:07:27.737123  431837 main.go:141] libmachine: (test-preload-513090) DBG | domain test-preload-513090 has defined MAC address 52:54:00:3c:07:7e in network mk-test-preload-513090
	I0510 19:07:27.737549  431837 main.go:141] libmachine: (test-preload-513090) DBG | unable to find current IP address of domain test-preload-513090 in network mk-test-preload-513090
	I0510 19:07:27.737572  431837 main.go:141] libmachine: (test-preload-513090) DBG | I0510 19:07:27.737524  431873 retry.go:31] will retry after 1.456520422s: waiting for domain to come up
	I0510 19:07:29.196285  431837 main.go:141] libmachine: (test-preload-513090) DBG | domain test-preload-513090 has defined MAC address 52:54:00:3c:07:7e in network mk-test-preload-513090
	I0510 19:07:29.196817  431837 main.go:141] libmachine: (test-preload-513090) DBG | unable to find current IP address of domain test-preload-513090 in network mk-test-preload-513090
	I0510 19:07:29.196851  431837 main.go:141] libmachine: (test-preload-513090) DBG | I0510 19:07:29.196751  431873 retry.go:31] will retry after 2.591463932s: waiting for domain to come up
	I0510 19:07:31.791125  431837 main.go:141] libmachine: (test-preload-513090) DBG | domain test-preload-513090 has defined MAC address 52:54:00:3c:07:7e in network mk-test-preload-513090
	I0510 19:07:31.791550  431837 main.go:141] libmachine: (test-preload-513090) DBG | unable to find current IP address of domain test-preload-513090 in network mk-test-preload-513090
	I0510 19:07:31.791589  431837 main.go:141] libmachine: (test-preload-513090) DBG | I0510 19:07:31.791517  431873 retry.go:31] will retry after 3.462350172s: waiting for domain to come up
	I0510 19:07:35.255698  431837 main.go:141] libmachine: (test-preload-513090) DBG | domain test-preload-513090 has defined MAC address 52:54:00:3c:07:7e in network mk-test-preload-513090
	I0510 19:07:35.256192  431837 main.go:141] libmachine: (test-preload-513090) DBG | unable to find current IP address of domain test-preload-513090 in network mk-test-preload-513090
	I0510 19:07:35.256221  431837 main.go:141] libmachine: (test-preload-513090) DBG | I0510 19:07:35.256130  431873 retry.go:31] will retry after 4.545611926s: waiting for domain to come up
	I0510 19:07:39.806888  431837 main.go:141] libmachine: (test-preload-513090) DBG | domain test-preload-513090 has defined MAC address 52:54:00:3c:07:7e in network mk-test-preload-513090
	I0510 19:07:39.807398  431837 main.go:141] libmachine: (test-preload-513090) found domain IP: 192.168.39.59
	I0510 19:07:39.807426  431837 main.go:141] libmachine: (test-preload-513090) reserving static IP address...
	I0510 19:07:39.807443  431837 main.go:141] libmachine: (test-preload-513090) DBG | domain test-preload-513090 has current primary IP address 192.168.39.59 and MAC address 52:54:00:3c:07:7e in network mk-test-preload-513090
	I0510 19:07:39.807884  431837 main.go:141] libmachine: (test-preload-513090) reserved static IP address 192.168.39.59 for domain test-preload-513090
	I0510 19:07:39.807922  431837 main.go:141] libmachine: (test-preload-513090) waiting for SSH...
	I0510 19:07:39.807942  431837 main.go:141] libmachine: (test-preload-513090) DBG | found host DHCP lease matching {name: "test-preload-513090", mac: "52:54:00:3c:07:7e", ip: "192.168.39.59"} in network mk-test-preload-513090: {Iface:virbr1 ExpiryTime:2025-05-10 20:07:32 +0000 UTC Type:0 Mac:52:54:00:3c:07:7e Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:test-preload-513090 Clientid:01:52:54:00:3c:07:7e}
	I0510 19:07:39.807962  431837 main.go:141] libmachine: (test-preload-513090) DBG | skip adding static IP to network mk-test-preload-513090 - found existing host DHCP lease matching {name: "test-preload-513090", mac: "52:54:00:3c:07:7e", ip: "192.168.39.59"}
	I0510 19:07:39.807979  431837 main.go:141] libmachine: (test-preload-513090) DBG | Getting to WaitForSSH function...
	I0510 19:07:39.810737  431837 main.go:141] libmachine: (test-preload-513090) DBG | domain test-preload-513090 has defined MAC address 52:54:00:3c:07:7e in network mk-test-preload-513090
	I0510 19:07:39.811041  431837 main.go:141] libmachine: (test-preload-513090) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:7e", ip: ""} in network mk-test-preload-513090: {Iface:virbr1 ExpiryTime:2025-05-10 20:07:32 +0000 UTC Type:0 Mac:52:54:00:3c:07:7e Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:test-preload-513090 Clientid:01:52:54:00:3c:07:7e}
	I0510 19:07:39.811072  431837 main.go:141] libmachine: (test-preload-513090) DBG | domain test-preload-513090 has defined IP address 192.168.39.59 and MAC address 52:54:00:3c:07:7e in network mk-test-preload-513090
	I0510 19:07:39.811218  431837 main.go:141] libmachine: (test-preload-513090) DBG | Using SSH client type: external
	I0510 19:07:39.811269  431837 main.go:141] libmachine: (test-preload-513090) DBG | Using SSH private key: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/test-preload-513090/id_rsa (-rw-------)
	I0510 19:07:39.811305  431837 main.go:141] libmachine: (test-preload-513090) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.59 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20720-388787/.minikube/machines/test-preload-513090/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0510 19:07:39.811341  431837 main.go:141] libmachine: (test-preload-513090) DBG | About to run SSH command:
	I0510 19:07:39.811354  431837 main.go:141] libmachine: (test-preload-513090) DBG | exit 0
	I0510 19:07:39.940066  431837 main.go:141] libmachine: (test-preload-513090) DBG | SSH cmd err, output: <nil>: 
	I0510 19:07:39.940498  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetConfigRaw
	I0510 19:07:39.941131  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetIP
	I0510 19:07:39.943916  431837 main.go:141] libmachine: (test-preload-513090) DBG | domain test-preload-513090 has defined MAC address 52:54:00:3c:07:7e in network mk-test-preload-513090
	I0510 19:07:39.944358  431837 main.go:141] libmachine: (test-preload-513090) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:7e", ip: ""} in network mk-test-preload-513090: {Iface:virbr1 ExpiryTime:2025-05-10 20:07:32 +0000 UTC Type:0 Mac:52:54:00:3c:07:7e Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:test-preload-513090 Clientid:01:52:54:00:3c:07:7e}
	I0510 19:07:39.944393  431837 main.go:141] libmachine: (test-preload-513090) DBG | domain test-preload-513090 has defined IP address 192.168.39.59 and MAC address 52:54:00:3c:07:7e in network mk-test-preload-513090
	I0510 19:07:39.944670  431837 profile.go:143] Saving config to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/test-preload-513090/config.json ...
	I0510 19:07:39.944903  431837 machine.go:93] provisionDockerMachine start ...
	I0510 19:07:39.944928  431837 main.go:141] libmachine: (test-preload-513090) Calling .DriverName
	I0510 19:07:39.945205  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetSSHHostname
	I0510 19:07:39.947778  431837 main.go:141] libmachine: (test-preload-513090) DBG | domain test-preload-513090 has defined MAC address 52:54:00:3c:07:7e in network mk-test-preload-513090
	I0510 19:07:39.948154  431837 main.go:141] libmachine: (test-preload-513090) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:7e", ip: ""} in network mk-test-preload-513090: {Iface:virbr1 ExpiryTime:2025-05-10 20:07:32 +0000 UTC Type:0 Mac:52:54:00:3c:07:7e Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:test-preload-513090 Clientid:01:52:54:00:3c:07:7e}
	I0510 19:07:39.948177  431837 main.go:141] libmachine: (test-preload-513090) DBG | domain test-preload-513090 has defined IP address 192.168.39.59 and MAC address 52:54:00:3c:07:7e in network mk-test-preload-513090
	I0510 19:07:39.948315  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetSSHPort
	I0510 19:07:39.948497  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetSSHKeyPath
	I0510 19:07:39.948660  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetSSHKeyPath
	I0510 19:07:39.948816  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetSSHUsername
	I0510 19:07:39.948951  431837 main.go:141] libmachine: Using SSH client type: native
	I0510 19:07:39.949276  431837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I0510 19:07:39.949296  431837 main.go:141] libmachine: About to run SSH command:
	hostname
	I0510 19:07:40.056539  431837 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0510 19:07:40.056578  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetMachineName
	I0510 19:07:40.056940  431837 buildroot.go:166] provisioning hostname "test-preload-513090"
	I0510 19:07:40.056989  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetMachineName
	I0510 19:07:40.057205  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetSSHHostname
	I0510 19:07:40.059912  431837 main.go:141] libmachine: (test-preload-513090) DBG | domain test-preload-513090 has defined MAC address 52:54:00:3c:07:7e in network mk-test-preload-513090
	I0510 19:07:40.060283  431837 main.go:141] libmachine: (test-preload-513090) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:7e", ip: ""} in network mk-test-preload-513090: {Iface:virbr1 ExpiryTime:2025-05-10 20:07:32 +0000 UTC Type:0 Mac:52:54:00:3c:07:7e Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:test-preload-513090 Clientid:01:52:54:00:3c:07:7e}
	I0510 19:07:40.060305  431837 main.go:141] libmachine: (test-preload-513090) DBG | domain test-preload-513090 has defined IP address 192.168.39.59 and MAC address 52:54:00:3c:07:7e in network mk-test-preload-513090
	I0510 19:07:40.060486  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetSSHPort
	I0510 19:07:40.060698  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetSSHKeyPath
	I0510 19:07:40.060898  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetSSHKeyPath
	I0510 19:07:40.061030  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetSSHUsername
	I0510 19:07:40.061283  431837 main.go:141] libmachine: Using SSH client type: native
	I0510 19:07:40.061575  431837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I0510 19:07:40.061593  431837 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-513090 && echo "test-preload-513090" | sudo tee /etc/hostname
	I0510 19:07:40.186176  431837 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-513090
	
	I0510 19:07:40.186212  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetSSHHostname
	I0510 19:07:40.189098  431837 main.go:141] libmachine: (test-preload-513090) DBG | domain test-preload-513090 has defined MAC address 52:54:00:3c:07:7e in network mk-test-preload-513090
	I0510 19:07:40.189429  431837 main.go:141] libmachine: (test-preload-513090) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:7e", ip: ""} in network mk-test-preload-513090: {Iface:virbr1 ExpiryTime:2025-05-10 20:07:32 +0000 UTC Type:0 Mac:52:54:00:3c:07:7e Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:test-preload-513090 Clientid:01:52:54:00:3c:07:7e}
	I0510 19:07:40.189462  431837 main.go:141] libmachine: (test-preload-513090) DBG | domain test-preload-513090 has defined IP address 192.168.39.59 and MAC address 52:54:00:3c:07:7e in network mk-test-preload-513090
	I0510 19:07:40.189622  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetSSHPort
	I0510 19:07:40.189864  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetSSHKeyPath
	I0510 19:07:40.190031  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetSSHKeyPath
	I0510 19:07:40.190241  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetSSHUsername
	I0510 19:07:40.190380  431837 main.go:141] libmachine: Using SSH client type: native
	I0510 19:07:40.190726  431837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I0510 19:07:40.190746  431837 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-513090' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-513090/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-513090' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0510 19:07:40.310028  431837 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0510 19:07:40.310060  431837 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20720-388787/.minikube CaCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20720-388787/.minikube}
	I0510 19:07:40.310105  431837 buildroot.go:174] setting up certificates
	I0510 19:07:40.310116  431837 provision.go:84] configureAuth start
	I0510 19:07:40.310126  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetMachineName
	I0510 19:07:40.310433  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetIP
	I0510 19:07:40.313143  431837 main.go:141] libmachine: (test-preload-513090) DBG | domain test-preload-513090 has defined MAC address 52:54:00:3c:07:7e in network mk-test-preload-513090
	I0510 19:07:40.313423  431837 main.go:141] libmachine: (test-preload-513090) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:7e", ip: ""} in network mk-test-preload-513090: {Iface:virbr1 ExpiryTime:2025-05-10 20:07:32 +0000 UTC Type:0 Mac:52:54:00:3c:07:7e Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:test-preload-513090 Clientid:01:52:54:00:3c:07:7e}
	I0510 19:07:40.313449  431837 main.go:141] libmachine: (test-preload-513090) DBG | domain test-preload-513090 has defined IP address 192.168.39.59 and MAC address 52:54:00:3c:07:7e in network mk-test-preload-513090
	I0510 19:07:40.313566  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetSSHHostname
	I0510 19:07:40.315539  431837 main.go:141] libmachine: (test-preload-513090) DBG | domain test-preload-513090 has defined MAC address 52:54:00:3c:07:7e in network mk-test-preload-513090
	I0510 19:07:40.315903  431837 main.go:141] libmachine: (test-preload-513090) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:7e", ip: ""} in network mk-test-preload-513090: {Iface:virbr1 ExpiryTime:2025-05-10 20:07:32 +0000 UTC Type:0 Mac:52:54:00:3c:07:7e Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:test-preload-513090 Clientid:01:52:54:00:3c:07:7e}
	I0510 19:07:40.315937  431837 main.go:141] libmachine: (test-preload-513090) DBG | domain test-preload-513090 has defined IP address 192.168.39.59 and MAC address 52:54:00:3c:07:7e in network mk-test-preload-513090
	I0510 19:07:40.316146  431837 provision.go:143] copyHostCerts
	I0510 19:07:40.316226  431837 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem, removing ...
	I0510 19:07:40.316244  431837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem
	I0510 19:07:40.316307  431837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem (1078 bytes)
	I0510 19:07:40.316399  431837 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem, removing ...
	I0510 19:07:40.316410  431837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem
	I0510 19:07:40.316434  431837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem (1123 bytes)
	I0510 19:07:40.316493  431837 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem, removing ...
	I0510 19:07:40.316503  431837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem
	I0510 19:07:40.316536  431837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem (1675 bytes)
	I0510 19:07:40.316628  431837 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem org=jenkins.test-preload-513090 san=[127.0.0.1 192.168.39.59 localhost minikube test-preload-513090]
	I0510 19:07:40.546937  431837 provision.go:177] copyRemoteCerts
	I0510 19:07:40.547009  431837 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0510 19:07:40.547036  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetSSHHostname
	I0510 19:07:40.549802  431837 main.go:141] libmachine: (test-preload-513090) DBG | domain test-preload-513090 has defined MAC address 52:54:00:3c:07:7e in network mk-test-preload-513090
	I0510 19:07:40.550122  431837 main.go:141] libmachine: (test-preload-513090) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:7e", ip: ""} in network mk-test-preload-513090: {Iface:virbr1 ExpiryTime:2025-05-10 20:07:32 +0000 UTC Type:0 Mac:52:54:00:3c:07:7e Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:test-preload-513090 Clientid:01:52:54:00:3c:07:7e}
	I0510 19:07:40.550154  431837 main.go:141] libmachine: (test-preload-513090) DBG | domain test-preload-513090 has defined IP address 192.168.39.59 and MAC address 52:54:00:3c:07:7e in network mk-test-preload-513090
	I0510 19:07:40.550374  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetSSHPort
	I0510 19:07:40.550572  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetSSHKeyPath
	I0510 19:07:40.550729  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetSSHUsername
	I0510 19:07:40.550858  431837 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/test-preload-513090/id_rsa Username:docker}
	I0510 19:07:40.636553  431837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0510 19:07:40.668320  431837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0510 19:07:40.698502  431837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0510 19:07:40.728071  431837 provision.go:87] duration metric: took 417.937862ms to configureAuth
	I0510 19:07:40.728114  431837 buildroot.go:189] setting minikube options for container-runtime
	I0510 19:07:40.728298  431837 config.go:182] Loaded profile config "test-preload-513090": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0510 19:07:40.728385  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetSSHHostname
	I0510 19:07:40.731398  431837 main.go:141] libmachine: (test-preload-513090) DBG | domain test-preload-513090 has defined MAC address 52:54:00:3c:07:7e in network mk-test-preload-513090
	I0510 19:07:40.731765  431837 main.go:141] libmachine: (test-preload-513090) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:7e", ip: ""} in network mk-test-preload-513090: {Iface:virbr1 ExpiryTime:2025-05-10 20:07:32 +0000 UTC Type:0 Mac:52:54:00:3c:07:7e Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:test-preload-513090 Clientid:01:52:54:00:3c:07:7e}
	I0510 19:07:40.731806  431837 main.go:141] libmachine: (test-preload-513090) DBG | domain test-preload-513090 has defined IP address 192.168.39.59 and MAC address 52:54:00:3c:07:7e in network mk-test-preload-513090
	I0510 19:07:40.731975  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetSSHPort
	I0510 19:07:40.732161  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetSSHKeyPath
	I0510 19:07:40.732318  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetSSHKeyPath
	I0510 19:07:40.732430  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetSSHUsername
	I0510 19:07:40.732605  431837 main.go:141] libmachine: Using SSH client type: native
	I0510 19:07:40.732840  431837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I0510 19:07:40.732862  431837 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0510 19:07:40.975004  431837 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0510 19:07:40.975041  431837 machine.go:96] duration metric: took 1.030121554s to provisionDockerMachine
	I0510 19:07:40.975058  431837 start.go:293] postStartSetup for "test-preload-513090" (driver="kvm2")
	I0510 19:07:40.975076  431837 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0510 19:07:40.975106  431837 main.go:141] libmachine: (test-preload-513090) Calling .DriverName
	I0510 19:07:40.975452  431837 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0510 19:07:40.975495  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetSSHHostname
	I0510 19:07:40.978631  431837 main.go:141] libmachine: (test-preload-513090) DBG | domain test-preload-513090 has defined MAC address 52:54:00:3c:07:7e in network mk-test-preload-513090
	I0510 19:07:40.979115  431837 main.go:141] libmachine: (test-preload-513090) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:7e", ip: ""} in network mk-test-preload-513090: {Iface:virbr1 ExpiryTime:2025-05-10 20:07:32 +0000 UTC Type:0 Mac:52:54:00:3c:07:7e Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:test-preload-513090 Clientid:01:52:54:00:3c:07:7e}
	I0510 19:07:40.979147  431837 main.go:141] libmachine: (test-preload-513090) DBG | domain test-preload-513090 has defined IP address 192.168.39.59 and MAC address 52:54:00:3c:07:7e in network mk-test-preload-513090
	I0510 19:07:40.979333  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetSSHPort
	I0510 19:07:40.979556  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetSSHKeyPath
	I0510 19:07:40.979726  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetSSHUsername
	I0510 19:07:40.979904  431837 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/test-preload-513090/id_rsa Username:docker}
	I0510 19:07:41.065663  431837 ssh_runner.go:195] Run: cat /etc/os-release
	I0510 19:07:41.070875  431837 info.go:137] Remote host: Buildroot 2024.11.2
	I0510 19:07:41.070903  431837 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-388787/.minikube/addons for local assets ...
	I0510 19:07:41.070971  431837 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-388787/.minikube/files for local assets ...
	I0510 19:07:41.071050  431837 filesync.go:149] local asset: /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem -> 3959802.pem in /etc/ssl/certs
	I0510 19:07:41.071180  431837 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0510 19:07:41.083733  431837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem --> /etc/ssl/certs/3959802.pem (1708 bytes)
	I0510 19:07:41.115788  431837 start.go:296] duration metric: took 140.713716ms for postStartSetup
	I0510 19:07:41.115835  431837 fix.go:56] duration metric: took 20.906921505s for fixHost
	I0510 19:07:41.115859  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetSSHHostname
	I0510 19:07:41.119263  431837 main.go:141] libmachine: (test-preload-513090) DBG | domain test-preload-513090 has defined MAC address 52:54:00:3c:07:7e in network mk-test-preload-513090
	I0510 19:07:41.119732  431837 main.go:141] libmachine: (test-preload-513090) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:7e", ip: ""} in network mk-test-preload-513090: {Iface:virbr1 ExpiryTime:2025-05-10 20:07:32 +0000 UTC Type:0 Mac:52:54:00:3c:07:7e Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:test-preload-513090 Clientid:01:52:54:00:3c:07:7e}
	I0510 19:07:41.119770  431837 main.go:141] libmachine: (test-preload-513090) DBG | domain test-preload-513090 has defined IP address 192.168.39.59 and MAC address 52:54:00:3c:07:7e in network mk-test-preload-513090
	I0510 19:07:41.119992  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetSSHPort
	I0510 19:07:41.120267  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetSSHKeyPath
	I0510 19:07:41.120476  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetSSHKeyPath
	I0510 19:07:41.120674  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetSSHUsername
	I0510 19:07:41.120900  431837 main.go:141] libmachine: Using SSH client type: native
	I0510 19:07:41.121096  431837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I0510 19:07:41.121106  431837 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0510 19:07:41.229294  431837 main.go:141] libmachine: SSH cmd err, output: <nil>: 1746904061.198418935
	
	I0510 19:07:41.229323  431837 fix.go:216] guest clock: 1746904061.198418935
	I0510 19:07:41.229334  431837 fix.go:229] Guest: 2025-05-10 19:07:41.198418935 +0000 UTC Remote: 2025-05-10 19:07:41.115840104 +0000 UTC m=+25.286688411 (delta=82.578831ms)
	I0510 19:07:41.229360  431837 fix.go:200] guest clock delta is within tolerance: 82.578831ms
	I0510 19:07:41.229367  431837 start.go:83] releasing machines lock for "test-preload-513090", held for 21.020463555s
	I0510 19:07:41.229396  431837 main.go:141] libmachine: (test-preload-513090) Calling .DriverName
	I0510 19:07:41.229714  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetIP
	I0510 19:07:41.232646  431837 main.go:141] libmachine: (test-preload-513090) DBG | domain test-preload-513090 has defined MAC address 52:54:00:3c:07:7e in network mk-test-preload-513090
	I0510 19:07:41.233056  431837 main.go:141] libmachine: (test-preload-513090) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:7e", ip: ""} in network mk-test-preload-513090: {Iface:virbr1 ExpiryTime:2025-05-10 20:07:32 +0000 UTC Type:0 Mac:52:54:00:3c:07:7e Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:test-preload-513090 Clientid:01:52:54:00:3c:07:7e}
	I0510 19:07:41.233086  431837 main.go:141] libmachine: (test-preload-513090) DBG | domain test-preload-513090 has defined IP address 192.168.39.59 and MAC address 52:54:00:3c:07:7e in network mk-test-preload-513090
	I0510 19:07:41.233276  431837 main.go:141] libmachine: (test-preload-513090) Calling .DriverName
	I0510 19:07:41.233797  431837 main.go:141] libmachine: (test-preload-513090) Calling .DriverName
	I0510 19:07:41.233967  431837 main.go:141] libmachine: (test-preload-513090) Calling .DriverName
	I0510 19:07:41.234071  431837 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0510 19:07:41.234115  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetSSHHostname
	I0510 19:07:41.234179  431837 ssh_runner.go:195] Run: cat /version.json
	I0510 19:07:41.234208  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetSSHHostname
	I0510 19:07:41.237429  431837 main.go:141] libmachine: (test-preload-513090) DBG | domain test-preload-513090 has defined MAC address 52:54:00:3c:07:7e in network mk-test-preload-513090
	I0510 19:07:41.237502  431837 main.go:141] libmachine: (test-preload-513090) DBG | domain test-preload-513090 has defined MAC address 52:54:00:3c:07:7e in network mk-test-preload-513090
	I0510 19:07:41.237764  431837 main.go:141] libmachine: (test-preload-513090) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:7e", ip: ""} in network mk-test-preload-513090: {Iface:virbr1 ExpiryTime:2025-05-10 20:07:32 +0000 UTC Type:0 Mac:52:54:00:3c:07:7e Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:test-preload-513090 Clientid:01:52:54:00:3c:07:7e}
	I0510 19:07:41.237805  431837 main.go:141] libmachine: (test-preload-513090) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:7e", ip: ""} in network mk-test-preload-513090: {Iface:virbr1 ExpiryTime:2025-05-10 20:07:32 +0000 UTC Type:0 Mac:52:54:00:3c:07:7e Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:test-preload-513090 Clientid:01:52:54:00:3c:07:7e}
	I0510 19:07:41.237824  431837 main.go:141] libmachine: (test-preload-513090) DBG | domain test-preload-513090 has defined IP address 192.168.39.59 and MAC address 52:54:00:3c:07:7e in network mk-test-preload-513090
	I0510 19:07:41.237840  431837 main.go:141] libmachine: (test-preload-513090) DBG | domain test-preload-513090 has defined IP address 192.168.39.59 and MAC address 52:54:00:3c:07:7e in network mk-test-preload-513090
	I0510 19:07:41.238035  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetSSHPort
	I0510 19:07:41.238052  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetSSHPort
	I0510 19:07:41.238293  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetSSHKeyPath
	I0510 19:07:41.238319  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetSSHKeyPath
	I0510 19:07:41.238471  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetSSHUsername
	I0510 19:07:41.238509  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetSSHUsername
	I0510 19:07:41.238638  431837 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/test-preload-513090/id_rsa Username:docker}
	I0510 19:07:41.238638  431837 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/test-preload-513090/id_rsa Username:docker}
	I0510 19:07:41.317361  431837 ssh_runner.go:195] Run: systemctl --version
	I0510 19:07:41.342822  431837 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0510 19:07:41.492476  431837 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0510 19:07:41.499856  431837 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0510 19:07:41.499931  431837 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0510 19:07:41.521487  431837 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0510 19:07:41.521515  431837 start.go:495] detecting cgroup driver to use...
	I0510 19:07:41.521580  431837 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0510 19:07:41.541749  431837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0510 19:07:41.560354  431837 docker.go:225] disabling cri-docker service (if available) ...
	I0510 19:07:41.560440  431837 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0510 19:07:41.578211  431837 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0510 19:07:41.597125  431837 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0510 19:07:41.733895  431837 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0510 19:07:41.886256  431837 docker.go:241] disabling docker service ...
	I0510 19:07:41.886344  431837 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0510 19:07:41.905358  431837 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0510 19:07:41.921187  431837 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0510 19:07:42.108963  431837 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0510 19:07:42.255372  431837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0510 19:07:42.272685  431837 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0510 19:07:42.296658  431837 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0510 19:07:42.296724  431837 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:07:42.309394  431837 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0510 19:07:42.309459  431837 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:07:42.322474  431837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:07:42.334836  431837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:07:42.346948  431837 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0510 19:07:42.359866  431837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:07:42.371992  431837 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:07:42.393006  431837 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
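	# Condensed view of the CRI-O tweaks applied above (a sketch of the same sed edits
	# against the stock /etc/crio/crio.conf.d/02-crio.conf, not minikube's own code):
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	# pin the pause image expected for Kubernetes v1.24
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' "$CONF"
	# match the kubelet's cgroupfs driver and run conmon inside the pod cgroup
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	# let pods bind low ports without extra privileges via a default sysctl
	sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' "$CONF"
	sudo grep -q "^ *default_sysctls" "$CONF" || \
	  sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"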
	I0510 19:07:42.405423  431837 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0510 19:07:42.415845  431837 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0510 19:07:42.415926  431837 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0510 19:07:42.432348  431837 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0510 19:07:42.443726  431837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 19:07:42.578757  431837 ssh_runner.go:195] Run: sudo systemctl restart crio
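	# The sysctl probe above fails until br_netfilter is loaded; the recovery path,
	# roughly as executed on the node (a sketch of the same commands):
	sudo sysctl net.bridge.bridge-nf-call-iptables || sudo modprobe br_netfilter
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	sudo systemctl daemon-reload
	sudo systemctl restart crio   # the caller then waits up to 60s for /var/run/crio/crio.sock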
	I0510 19:07:42.694173  431837 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0510 19:07:42.694264  431837 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0510 19:07:42.699922  431837 start.go:563] Will wait 60s for crictl version
	I0510 19:07:42.700011  431837 ssh_runner.go:195] Run: which crictl
	I0510 19:07:42.704857  431837 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0510 19:07:42.751567  431837 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0510 19:07:42.751659  431837 ssh_runner.go:195] Run: crio --version
	I0510 19:07:42.782681  431837 ssh_runner.go:195] Run: crio --version
	I0510 19:07:42.814530  431837 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0510 19:07:42.815770  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetIP
	I0510 19:07:42.818681  431837 main.go:141] libmachine: (test-preload-513090) DBG | domain test-preload-513090 has defined MAC address 52:54:00:3c:07:7e in network mk-test-preload-513090
	I0510 19:07:42.819176  431837 main.go:141] libmachine: (test-preload-513090) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:7e", ip: ""} in network mk-test-preload-513090: {Iface:virbr1 ExpiryTime:2025-05-10 20:07:32 +0000 UTC Type:0 Mac:52:54:00:3c:07:7e Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:test-preload-513090 Clientid:01:52:54:00:3c:07:7e}
	I0510 19:07:42.819202  431837 main.go:141] libmachine: (test-preload-513090) DBG | domain test-preload-513090 has defined IP address 192.168.39.59 and MAC address 52:54:00:3c:07:7e in network mk-test-preload-513090
	I0510 19:07:42.819490  431837 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0510 19:07:42.824417  431837 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0510 19:07:42.839527  431837 kubeadm.go:875] updating cluster {Name:test-preload-513090 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.24.4 ClusterName:test-preload-513090 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.59 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker
MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0510 19:07:42.839638  431837 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0510 19:07:42.839692  431837 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 19:07:42.879720  431837 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0510 19:07:42.879784  431837 ssh_runner.go:195] Run: which lz4
	I0510 19:07:42.884451  431837 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0510 19:07:42.889217  431837 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0510 19:07:42.889250  431837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0510 19:07:44.653089  431837 crio.go:462] duration metric: took 1.768680353s to copy over tarball
	I0510 19:07:44.653173  431837 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0510 19:07:46.912797  431837 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.259580071s)
	I0510 19:07:46.912838  431837 crio.go:469] duration metric: took 2.259713518s to extract the tarball
	I0510 19:07:46.912847  431837 ssh_runner.go:146] rm: /preloaded.tar.lz4
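	# Equivalent of the preload restore above, assuming the lz4 tarball has already
	# been copied to /preloaded.tar.lz4 on the node (a sketch):
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm -f /preloaded.tar.lz4
	sudo crictl images --output json   # re-check which images the runtime now has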
	I0510 19:07:46.954579  431837 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 19:07:47.002131  431837 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0510 19:07:47.002163  431837 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0510 19:07:47.002248  431837 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0510 19:07:47.002307  431837 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0510 19:07:47.002373  431837 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0510 19:07:47.002400  431837 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0510 19:07:47.002314  431837 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0510 19:07:47.002455  431837 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0510 19:07:47.002339  431837 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0510 19:07:47.002355  431837 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0510 19:07:47.003898  431837 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0510 19:07:47.003908  431837 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0510 19:07:47.003937  431837 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0510 19:07:47.003896  431837 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0510 19:07:47.003908  431837 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0510 19:07:47.003904  431837 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0510 19:07:47.003896  431837 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0510 19:07:47.003987  431837 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0510 19:07:47.143362  431837 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0510 19:07:47.145133  431837 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0510 19:07:47.149400  431837 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0510 19:07:47.150710  431837 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0510 19:07:47.151115  431837 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0510 19:07:47.164866  431837 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0510 19:07:47.177816  431837 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0510 19:07:47.256704  431837 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0510 19:07:47.256773  431837 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0510 19:07:47.256841  431837 ssh_runner.go:195] Run: which crictl
	I0510 19:07:47.349601  431837 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0510 19:07:47.349641  431837 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0510 19:07:47.349654  431837 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0510 19:07:47.349677  431837 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0510 19:07:47.349713  431837 ssh_runner.go:195] Run: which crictl
	I0510 19:07:47.349765  431837 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0510 19:07:47.349797  431837 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0510 19:07:47.349816  431837 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0510 19:07:47.349827  431837 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0510 19:07:47.349850  431837 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0510 19:07:47.349848  431837 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0510 19:07:47.349715  431837 ssh_runner.go:195] Run: which crictl
	I0510 19:07:47.349874  431837 ssh_runner.go:195] Run: which crictl
	I0510 19:07:47.349875  431837 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0510 19:07:47.349885  431837 ssh_runner.go:195] Run: which crictl
	I0510 19:07:47.349897  431837 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0510 19:07:47.349946  431837 ssh_runner.go:195] Run: which crictl
	I0510 19:07:47.349852  431837 ssh_runner.go:195] Run: which crictl
	I0510 19:07:47.349901  431837 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0510 19:07:47.359646  431837 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0510 19:07:47.403613  431837 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0510 19:07:47.403658  431837 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0510 19:07:47.403709  431837 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0510 19:07:47.403744  431837 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0510 19:07:47.403797  431837 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0510 19:07:47.403820  431837 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0510 19:07:47.433796  431837 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0510 19:07:47.554182  431837 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0510 19:07:47.563288  431837 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0510 19:07:47.599595  431837 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0510 19:07:47.599744  431837 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0510 19:07:47.599784  431837 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0510 19:07:47.599812  431837 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0510 19:07:47.599907  431837 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0510 19:07:47.743037  431837 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0510 19:07:47.743038  431837 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0510 19:07:47.743146  431837 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0510 19:07:47.768219  431837 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0510 19:07:47.768347  431837 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0510 19:07:47.768350  431837 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0510 19:07:47.768439  431837 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0510 19:07:47.768528  431837 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0510 19:07:47.768707  431837 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0510 19:07:47.846033  431837 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0510 19:07:47.846108  431837 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0510 19:07:47.846132  431837 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0510 19:07:47.846176  431837 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0510 19:07:47.846191  431837 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0510 19:07:47.900120  431837 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0510 19:07:47.900257  431837 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0510 19:07:47.915330  431837 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0510 19:07:47.915391  431837 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0510 19:07:47.915456  431837 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0510 19:07:47.915477  431837 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0510 19:07:47.915459  431837 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0510 19:07:47.915499  431837 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0510 19:07:47.915540  431837 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0510 19:07:48.086320  431837 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0510 19:07:50.663776  431837 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6: (2.817558222s)
	I0510 19:07:50.663814  431837 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0510 19:07:50.663845  431837 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0510 19:07:50.663882  431837 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: (2.817655474s)
	I0510 19:07:50.663943  431837 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4: (2.763657558s)
	I0510 19:07:50.663989  431837 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0510 19:07:50.663951  431837 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0510 19:07:50.663994  431837 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4: (2.748438291s)
	I0510 19:07:50.664013  431837 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0510 19:07:50.663898  431837 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0510 19:07:50.664056  431837 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4: (2.748558952s)
	I0510 19:07:50.664082  431837 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0510 19:07:50.664103  431837 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: (2.748586857s)
	I0510 19:07:50.664121  431837 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0510 19:07:50.664130  431837 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.577784798s)
	I0510 19:07:51.415471  431837 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0510 19:07:51.415519  431837 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0510 19:07:51.415578  431837 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0510 19:07:52.162999  431837 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0510 19:07:52.163048  431837 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0510 19:07:52.163107  431837 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0510 19:07:54.316679  431837 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.153548396s)
	I0510 19:07:54.316722  431837 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0510 19:07:54.316756  431837 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0510 19:07:54.316824  431837 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0510 19:07:54.765536  431837 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0510 19:07:54.765590  431837 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0510 19:07:54.765650  431837 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0510 19:07:55.611996  431837 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0510 19:07:55.612064  431837 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0510 19:07:55.612127  431837 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0510 19:07:55.764608  431837 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0510 19:07:55.764676  431837 cache_images.go:123] Successfully loaded all cached images
	I0510 19:07:55.764685  431837 cache_images.go:92] duration metric: took 8.762503659s to LoadCachedImages
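	# The cache load above amounts to this loop over the archives transferred to
	# /var/lib/minikube/images (a sketch; file names taken from the log lines above):
	for img in coredns_v1.8.6 kube-controller-manager_v1.24.4 kube-apiserver_v1.24.4 \
	           etcd_3.5.3-0 kube-scheduler_v1.24.4 kube-proxy_v1.24.4 pause_3.7; do
	  sudo podman load -i "/var/lib/minikube/images/${img}"
	done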
	I0510 19:07:55.764701  431837 kubeadm.go:926] updating node { 192.168.39.59 8443 v1.24.4 crio true true} ...
	I0510 19:07:55.764884  431837 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-513090 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.59
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-513090 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
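	# The kubelet fragment above is what gets written (per the scp lines further down)
	# as a systemd drop-in; applying it by hand would look roughly like this sketch.
	# The empty ExecStart= line intentionally resets the stock unit's command first.
	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<-'EOF'
	[Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-513090 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.59
	
	[Install]
	EOF
	sudo systemctl daemon-reload && sudo systemctl start kubelet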
	I0510 19:07:55.764982  431837 ssh_runner.go:195] Run: crio config
	I0510 19:07:55.821064  431837 cni.go:84] Creating CNI manager for ""
	I0510 19:07:55.821090  431837 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0510 19:07:55.821104  431837 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0510 19:07:55.821123  431837 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.59 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-513090 NodeName:test-preload-513090 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.59"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.59 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0510 19:07:55.821259  431837 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.59
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-513090"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.59
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.59"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0510 19:07:55.821329  431837 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0510 19:07:55.834772  431837 binaries.go:44] Found k8s binaries, skipping transfer
	I0510 19:07:55.834850  431837 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0510 19:07:55.847040  431837 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0510 19:07:55.868760  431837 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0510 19:07:55.890670  431837 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0510 19:07:55.913298  431837 ssh_runner.go:195] Run: grep 192.168.39.59	control-plane.minikube.internal$ /etc/hosts
	I0510 19:07:55.917807  431837 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.59	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0510 19:07:55.933607  431837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 19:07:56.069393  431837 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0510 19:07:56.099655  431837 certs.go:68] Setting up /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/test-preload-513090 for IP: 192.168.39.59
	I0510 19:07:56.099692  431837 certs.go:194] generating shared ca certs ...
	I0510 19:07:56.099709  431837 certs.go:226] acquiring lock for ca certs: {Name:mk8db74782205da4ac57ef815dd495cda255251a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 19:07:56.099889  431837 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20720-388787/.minikube/ca.key
	I0510 19:07:56.099950  431837 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.key
	I0510 19:07:56.099965  431837 certs.go:256] generating profile certs ...
	I0510 19:07:56.100076  431837 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/test-preload-513090/client.key
	I0510 19:07:56.100153  431837 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/test-preload-513090/apiserver.key.35d42bff
	I0510 19:07:56.100203  431837 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/test-preload-513090/proxy-client.key
	I0510 19:07:56.100397  431837 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/395980.pem (1338 bytes)
	W0510 19:07:56.100435  431837 certs.go:480] ignoring /home/jenkins/minikube-integration/20720-388787/.minikube/certs/395980_empty.pem, impossibly tiny 0 bytes
	I0510 19:07:56.100444  431837 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem (1679 bytes)
	I0510 19:07:56.100483  431837 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem (1078 bytes)
	I0510 19:07:56.100514  431837 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem (1123 bytes)
	I0510 19:07:56.100542  431837 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem (1675 bytes)
	I0510 19:07:56.100598  431837 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem (1708 bytes)
	I0510 19:07:56.101446  431837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0510 19:07:56.137906  431837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0510 19:07:56.172645  431837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0510 19:07:56.207147  431837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0510 19:07:56.254703  431837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/test-preload-513090/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0510 19:07:56.288974  431837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/test-preload-513090/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0510 19:07:56.330520  431837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/test-preload-513090/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0510 19:07:56.376269  431837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/test-preload-513090/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0510 19:07:56.410555  431837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/certs/395980.pem --> /usr/share/ca-certificates/395980.pem (1338 bytes)
	I0510 19:07:56.442990  431837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem --> /usr/share/ca-certificates/3959802.pem (1708 bytes)
	I0510 19:07:56.473704  431837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0510 19:07:56.505049  431837 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0510 19:07:56.527067  431837 ssh_runner.go:195] Run: openssl version
	I0510 19:07:56.533682  431837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/395980.pem && ln -fs /usr/share/ca-certificates/395980.pem /etc/ssl/certs/395980.pem"
	I0510 19:07:56.547103  431837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/395980.pem
	I0510 19:07:56.552636  431837 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 10 18:00 /usr/share/ca-certificates/395980.pem
	I0510 19:07:56.552717  431837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/395980.pem
	I0510 19:07:56.560027  431837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/395980.pem /etc/ssl/certs/51391683.0"
	I0510 19:07:56.573588  431837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3959802.pem && ln -fs /usr/share/ca-certificates/3959802.pem /etc/ssl/certs/3959802.pem"
	I0510 19:07:56.587453  431837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3959802.pem
	I0510 19:07:56.592959  431837 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 10 18:00 /usr/share/ca-certificates/3959802.pem
	I0510 19:07:56.593038  431837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3959802.pem
	I0510 19:07:56.600744  431837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3959802.pem /etc/ssl/certs/3ec20f2e.0"
	I0510 19:07:56.614398  431837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0510 19:07:56.629203  431837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0510 19:07:56.634626  431837 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 10 17:52 /usr/share/ca-certificates/minikubeCA.pem
	I0510 19:07:56.634688  431837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0510 19:07:56.642170  431837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0510 19:07:56.655192  431837 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0510 19:07:56.660449  431837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0510 19:07:56.667935  431837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0510 19:07:56.675989  431837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0510 19:07:56.683662  431837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0510 19:07:56.690991  431837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0510 19:07:56.698180  431837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
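	# The -checkend probes above succeed only if the certificate is still valid
	# 86400 seconds (24h) from now, e.g. (a sketch):
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "still valid for at least another day" \
	  || echo "expires within 24h (or is already expired)"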
	I0510 19:07:56.705396  431837 kubeadm.go:392] StartCluster: {Name:test-preload-513090 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-513090 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.59 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 19:07:56.705478  431837 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0510 19:07:56.705523  431837 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0510 19:07:56.746235  431837 cri.go:89] found id: ""
	I0510 19:07:56.746316  431837 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0510 19:07:56.758548  431837 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0510 19:07:56.758572  431837 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0510 19:07:56.758628  431837 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0510 19:07:56.770157  431837 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0510 19:07:56.770694  431837 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-513090" does not appear in /home/jenkins/minikube-integration/20720-388787/kubeconfig
	I0510 19:07:56.770836  431837 kubeconfig.go:62] /home/jenkins/minikube-integration/20720-388787/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-513090" cluster setting kubeconfig missing "test-preload-513090" context setting]
	I0510 19:07:56.771350  431837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-388787/kubeconfig: {Name:mk5ad7285fe4c17b2779ea6d5a539f101fe94797 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 19:07:56.772054  431837 kapi.go:59] client config for test-preload-513090: &rest.Config{Host:"https://192.168.39.59:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20720-388787/.minikube/profiles/test-preload-513090/client.crt", KeyFile:"/home/jenkins/minikube-integration/20720-388787/.minikube/profiles/test-preload-513090/client.key", CAFile:"/home/jenkins/minikube-integration/20720-388787/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24b3a60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0510 19:07:56.772666  431837 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0510 19:07:56.772688  431837 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0510 19:07:56.772693  431837 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0510 19:07:56.772697  431837 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0510 19:07:56.773199  431837 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0510 19:07:56.784512  431837 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.39.59
	I0510 19:07:56.784558  431837 kubeadm.go:1152] stopping kube-system containers ...
	I0510 19:07:56.784574  431837 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0510 19:07:56.784641  431837 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0510 19:07:56.826362  431837 cri.go:89] found id: ""
	I0510 19:07:56.826476  431837 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0510 19:07:56.845220  431837 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0510 19:07:56.857790  431837 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0510 19:07:56.857825  431837 kubeadm.go:157] found existing configuration files:
	
	I0510 19:07:56.857900  431837 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0510 19:07:56.869380  431837 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0510 19:07:56.869458  431837 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0510 19:07:56.881241  431837 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0510 19:07:56.892482  431837 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0510 19:07:56.892574  431837 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0510 19:07:56.904573  431837 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0510 19:07:56.916162  431837 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0510 19:07:56.916235  431837 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0510 19:07:56.928258  431837 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0510 19:07:56.939178  431837 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0510 19:07:56.939269  431837 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0510 19:07:56.950679  431837 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0510 19:07:56.963085  431837 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0510 19:07:57.064422  431837 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0510 19:07:58.284794  431837 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.220333454s)
	I0510 19:07:58.284840  431837 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0510 19:07:58.585379  431837 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0510 19:07:58.662257  431837 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
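	# The "cluster restart" path above re-runs individual kubeadm init phases against
	# the freshly written /var/tmp/minikube/kubeadm.yaml instead of doing a full init;
	# the same sequence by hand would look roughly like this (a sketch):
	phase() { sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" \
	    kubeadm init phase "$@" --config /var/tmp/minikube/kubeadm.yaml; }
	phase certs all
	phase kubeconfig all
	phase kubelet-start
	phase control-plane all
	phase etcd local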
	I0510 19:07:58.737601  431837 api_server.go:52] waiting for apiserver process to appear ...
	I0510 19:07:58.737703  431837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:07:59.237881  431837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:07:59.738255  431837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:07:59.764033  431837 api_server.go:72] duration metric: took 1.026432877s to wait for apiserver process to appear ...
	I0510 19:07:59.764070  431837 api_server.go:88] waiting for apiserver healthz status ...
	I0510 19:07:59.764100  431837 api_server.go:253] Checking apiserver healthz at https://192.168.39.59:8443/healthz ...
	I0510 19:07:59.764655  431837 api_server.go:269] stopped: https://192.168.39.59:8443/healthz: Get "https://192.168.39.59:8443/healthz": dial tcp 192.168.39.59:8443: connect: connection refused
	I0510 19:08:00.264878  431837 api_server.go:253] Checking apiserver healthz at https://192.168.39.59:8443/healthz ...
	I0510 19:08:04.054276  431837 api_server.go:279] https://192.168.39.59:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0510 19:08:04.054311  431837 api_server.go:103] status: https://192.168.39.59:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0510 19:08:04.054335  431837 api_server.go:253] Checking apiserver healthz at https://192.168.39.59:8443/healthz ...
	I0510 19:08:04.075624  431837 api_server.go:279] https://192.168.39.59:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0510 19:08:04.075659  431837 api_server.go:103] status: https://192.168.39.59:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0510 19:08:04.265118  431837 api_server.go:253] Checking apiserver healthz at https://192.168.39.59:8443/healthz ...
	I0510 19:08:04.271404  431837 api_server.go:279] https://192.168.39.59:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0510 19:08:04.271435  431837 api_server.go:103] status: https://192.168.39.59:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0510 19:08:04.765113  431837 api_server.go:253] Checking apiserver healthz at https://192.168.39.59:8443/healthz ...
	I0510 19:08:04.773145  431837 api_server.go:279] https://192.168.39.59:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0510 19:08:04.773180  431837 api_server.go:103] status: https://192.168.39.59:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0510 19:08:05.264242  431837 api_server.go:253] Checking apiserver healthz at https://192.168.39.59:8443/healthz ...
	I0510 19:08:05.269544  431837 api_server.go:279] https://192.168.39.59:8443/healthz returned 200:
	ok
	I0510 19:08:05.277047  431837 api_server.go:141] control plane version: v1.24.4
	I0510 19:08:05.277082  431837 api_server.go:131] duration metric: took 5.51300316s to wait for apiserver health ...
	I0510 19:08:05.277093  431837 cni.go:84] Creating CNI manager for ""
	I0510 19:08:05.277099  431837 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0510 19:08:05.279070  431837 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0510 19:08:05.280527  431837 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0510 19:08:05.294403  431837 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0510 19:08:05.317723  431837 system_pods.go:43] waiting for kube-system pods to appear ...
	I0510 19:08:05.321862  431837 system_pods.go:59] 7 kube-system pods found
	I0510 19:08:05.321913  431837 system_pods.go:61] "coredns-6d4b75cb6d-ql6bp" [b6f78565-ea70-4741-929d-1b4623a50d49] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0510 19:08:05.321922  431837 system_pods.go:61] "etcd-test-preload-513090" [ab0bb7dd-4587-4cf4-8095-60cc8be7290a] Running
	I0510 19:08:05.321930  431837 system_pods.go:61] "kube-apiserver-test-preload-513090" [4d63d7c9-78d6-4084-ad81-0ceda2f0fd7b] Running
	I0510 19:08:05.321935  431837 system_pods.go:61] "kube-controller-manager-test-preload-513090" [7741a8f5-d97b-42c8-8177-6a4b236897b7] Running
	I0510 19:08:05.321945  431837 system_pods.go:61] "kube-proxy-twmh6" [d0e857fd-30e9-4499-be28-e3dff6a133ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0510 19:08:05.321955  431837 system_pods.go:61] "kube-scheduler-test-preload-513090" [25855a32-1c73-4418-a710-7e3f71889264] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0510 19:08:05.321965  431837 system_pods.go:61] "storage-provisioner" [93838a50-0cc3-47dd-86d3-de7b6fb5f926] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0510 19:08:05.321974  431837 system_pods.go:74] duration metric: took 4.225002ms to wait for pod list to return data ...
	I0510 19:08:05.321994  431837 node_conditions.go:102] verifying NodePressure condition ...
	I0510 19:08:05.324686  431837 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0510 19:08:05.324717  431837 node_conditions.go:123] node cpu capacity is 2
	I0510 19:08:05.324730  431837 node_conditions.go:105] duration metric: took 2.729862ms to run NodePressure ...
	I0510 19:08:05.324748  431837 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0510 19:08:05.573187  431837 kubeadm.go:720] waiting for restarted kubelet to initialise ...
	I0510 19:08:05.576915  431837 kubeadm.go:735] kubelet initialised
	I0510 19:08:05.576945  431837 kubeadm.go:736] duration metric: took 3.727236ms waiting for restarted kubelet to initialise ...
	I0510 19:08:05.576967  431837 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0510 19:08:05.593531  431837 ops.go:34] apiserver oom_adj: -16
	I0510 19:08:05.593556  431837 kubeadm.go:593] duration metric: took 8.83497755s to restartPrimaryControlPlane
	I0510 19:08:05.593567  431837 kubeadm.go:394] duration metric: took 8.888187742s to StartCluster
	I0510 19:08:05.593585  431837 settings.go:142] acquiring lock: {Name:mk4ab6a112c947bfdedd8044017a7c560266fb5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 19:08:05.593699  431837 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20720-388787/kubeconfig
	I0510 19:08:05.594350  431837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-388787/kubeconfig: {Name:mk5ad7285fe4c17b2779ea6d5a539f101fe94797 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 19:08:05.594595  431837 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.59 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0510 19:08:05.594798  431837 config.go:182] Loaded profile config "test-preload-513090": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0510 19:08:05.594745  431837 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0510 19:08:05.594854  431837 addons.go:69] Setting storage-provisioner=true in profile "test-preload-513090"
	I0510 19:08:05.594860  431837 addons.go:69] Setting default-storageclass=true in profile "test-preload-513090"
	I0510 19:08:05.594879  431837 addons.go:238] Setting addon storage-provisioner=true in "test-preload-513090"
	W0510 19:08:05.594893  431837 addons.go:247] addon storage-provisioner should already be in state true
	I0510 19:08:05.594908  431837 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-513090"
	I0510 19:08:05.594925  431837 host.go:66] Checking if "test-preload-513090" exists ...
	I0510 19:08:05.595309  431837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:08:05.595358  431837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:08:05.595433  431837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:08:05.595469  431837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:08:05.596343  431837 out.go:177] * Verifying Kubernetes components...
	I0510 19:08:05.598282  431837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 19:08:05.611292  431837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34821
	I0510 19:08:05.611815  431837 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:08:05.612291  431837 main.go:141] libmachine: Using API Version  1
	I0510 19:08:05.612316  431837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:08:05.612671  431837 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:08:05.612867  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetState
	I0510 19:08:05.615339  431837 kapi.go:59] client config for test-preload-513090: &rest.Config{Host:"https://192.168.39.59:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20720-388787/.minikube/profiles/test-preload-513090/client.crt", KeyFile:"/home/jenkins/minikube-integration/20720-388787/.minikube/profiles/test-preload-513090/client.key", CAFile:"/home/jenkins/minikube-integration/20720-388787/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint
8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24b3a60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0510 19:08:05.615699  431837 addons.go:238] Setting addon default-storageclass=true in "test-preload-513090"
	W0510 19:08:05.615719  431837 addons.go:247] addon default-storageclass should already be in state true
	I0510 19:08:05.615749  431837 host.go:66] Checking if "test-preload-513090" exists ...
	I0510 19:08:05.615968  431837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42569
	I0510 19:08:05.616101  431837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:08:05.616149  431837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:08:05.616463  431837 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:08:05.617023  431837 main.go:141] libmachine: Using API Version  1
	I0510 19:08:05.617040  431837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:08:05.617416  431837 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:08:05.618034  431837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:08:05.618085  431837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:08:05.632757  431837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41883
	I0510 19:08:05.632885  431837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44669
	I0510 19:08:05.633335  431837 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:08:05.633341  431837 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:08:05.633859  431837 main.go:141] libmachine: Using API Version  1
	I0510 19:08:05.633874  431837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:08:05.633919  431837 main.go:141] libmachine: Using API Version  1
	I0510 19:08:05.633934  431837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:08:05.634230  431837 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:08:05.634313  431837 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:08:05.634475  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetState
	I0510 19:08:05.634931  431837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:08:05.634998  431837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:08:05.636220  431837 main.go:141] libmachine: (test-preload-513090) Calling .DriverName
	I0510 19:08:05.638668  431837 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0510 19:08:05.640678  431837 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0510 19:08:05.640706  431837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0510 19:08:05.640730  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetSSHHostname
	I0510 19:08:05.644627  431837 main.go:141] libmachine: (test-preload-513090) DBG | domain test-preload-513090 has defined MAC address 52:54:00:3c:07:7e in network mk-test-preload-513090
	I0510 19:08:05.645081  431837 main.go:141] libmachine: (test-preload-513090) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:7e", ip: ""} in network mk-test-preload-513090: {Iface:virbr1 ExpiryTime:2025-05-10 20:07:32 +0000 UTC Type:0 Mac:52:54:00:3c:07:7e Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:test-preload-513090 Clientid:01:52:54:00:3c:07:7e}
	I0510 19:08:05.645111  431837 main.go:141] libmachine: (test-preload-513090) DBG | domain test-preload-513090 has defined IP address 192.168.39.59 and MAC address 52:54:00:3c:07:7e in network mk-test-preload-513090
	I0510 19:08:05.645345  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetSSHPort
	I0510 19:08:05.645510  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetSSHKeyPath
	I0510 19:08:05.645646  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetSSHUsername
	I0510 19:08:05.645730  431837 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/test-preload-513090/id_rsa Username:docker}
	I0510 19:08:05.667497  431837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35761
	I0510 19:08:05.668099  431837 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:08:05.668635  431837 main.go:141] libmachine: Using API Version  1
	I0510 19:08:05.668668  431837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:08:05.669076  431837 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:08:05.669283  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetState
	I0510 19:08:05.671034  431837 main.go:141] libmachine: (test-preload-513090) Calling .DriverName
	I0510 19:08:05.671288  431837 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0510 19:08:05.671306  431837 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0510 19:08:05.671329  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetSSHHostname
	I0510 19:08:05.675265  431837 main.go:141] libmachine: (test-preload-513090) DBG | domain test-preload-513090 has defined MAC address 52:54:00:3c:07:7e in network mk-test-preload-513090
	I0510 19:08:05.675607  431837 main.go:141] libmachine: (test-preload-513090) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:7e", ip: ""} in network mk-test-preload-513090: {Iface:virbr1 ExpiryTime:2025-05-10 20:07:32 +0000 UTC Type:0 Mac:52:54:00:3c:07:7e Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:test-preload-513090 Clientid:01:52:54:00:3c:07:7e}
	I0510 19:08:05.675639  431837 main.go:141] libmachine: (test-preload-513090) DBG | domain test-preload-513090 has defined IP address 192.168.39.59 and MAC address 52:54:00:3c:07:7e in network mk-test-preload-513090
	I0510 19:08:05.675824  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetSSHPort
	I0510 19:08:05.676073  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetSSHKeyPath
	I0510 19:08:05.676294  431837 main.go:141] libmachine: (test-preload-513090) Calling .GetSSHUsername
	I0510 19:08:05.676431  431837 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/test-preload-513090/id_rsa Username:docker}
	I0510 19:08:05.857096  431837 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0510 19:08:05.886657  431837 node_ready.go:35] waiting up to 6m0s for node "test-preload-513090" to be "Ready" ...
	I0510 19:08:05.956757  431837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0510 19:08:06.005448  431837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0510 19:08:07.265060  431837 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.308263144s)
	I0510 19:08:07.265132  431837 main.go:141] libmachine: Making call to close driver server
	I0510 19:08:07.265148  431837 main.go:141] libmachine: (test-preload-513090) Calling .Close
	I0510 19:08:07.265487  431837 main.go:141] libmachine: Successfully made call to close driver server
	I0510 19:08:07.265508  431837 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 19:08:07.265524  431837 main.go:141] libmachine: Making call to close driver server
	I0510 19:08:07.265534  431837 main.go:141] libmachine: (test-preload-513090) Calling .Close
	I0510 19:08:07.265758  431837 main.go:141] libmachine: Successfully made call to close driver server
	I0510 19:08:07.265774  431837 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 19:08:07.265797  431837 main.go:141] libmachine: (test-preload-513090) DBG | Closing plugin on server side
	I0510 19:08:07.272515  431837 main.go:141] libmachine: Making call to close driver server
	I0510 19:08:07.272537  431837 main.go:141] libmachine: (test-preload-513090) Calling .Close
	I0510 19:08:07.272911  431837 main.go:141] libmachine: Successfully made call to close driver server
	I0510 19:08:07.272932  431837 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 19:08:07.272956  431837 main.go:141] libmachine: (test-preload-513090) DBG | Closing plugin on server side
	I0510 19:08:07.299307  431837 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.293817346s)
	I0510 19:08:07.299386  431837 main.go:141] libmachine: Making call to close driver server
	I0510 19:08:07.299401  431837 main.go:141] libmachine: (test-preload-513090) Calling .Close
	I0510 19:08:07.299740  431837 main.go:141] libmachine: Successfully made call to close driver server
	I0510 19:08:07.299763  431837 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 19:08:07.299773  431837 main.go:141] libmachine: Making call to close driver server
	I0510 19:08:07.299779  431837 main.go:141] libmachine: (test-preload-513090) Calling .Close
	I0510 19:08:07.299790  431837 main.go:141] libmachine: (test-preload-513090) DBG | Closing plugin on server side
	I0510 19:08:07.300001  431837 main.go:141] libmachine: Successfully made call to close driver server
	I0510 19:08:07.300017  431837 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 19:08:07.301913  431837 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0510 19:08:07.303294  431837 addons.go:514] duration metric: took 1.708571017s for enable addons: enabled=[default-storageclass storage-provisioner]
	W0510 19:08:07.890750  431837 node_ready.go:57] node "test-preload-513090" has "Ready":"False" status (will retry)
	W0510 19:08:10.391974  431837 node_ready.go:57] node "test-preload-513090" has "Ready":"False" status (will retry)
	W0510 19:08:12.891169  431837 node_ready.go:57] node "test-preload-513090" has "Ready":"False" status (will retry)
	I0510 19:08:14.890423  431837 node_ready.go:49] node "test-preload-513090" is "Ready"
	I0510 19:08:14.890461  431837 node_ready.go:38] duration metric: took 9.003750419s for node "test-preload-513090" to be "Ready" ...
	I0510 19:08:14.890482  431837 api_server.go:52] waiting for apiserver process to appear ...
	I0510 19:08:14.890545  431837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:08:14.916479  431837 api_server.go:72] duration metric: took 9.321841532s to wait for apiserver process to appear ...
	I0510 19:08:14.916510  431837 api_server.go:88] waiting for apiserver healthz status ...
	I0510 19:08:14.916528  431837 api_server.go:253] Checking apiserver healthz at https://192.168.39.59:8443/healthz ...
	I0510 19:08:14.924001  431837 api_server.go:279] https://192.168.39.59:8443/healthz returned 200:
	ok
	I0510 19:08:14.925074  431837 api_server.go:141] control plane version: v1.24.4
	I0510 19:08:14.925100  431837 api_server.go:131] duration metric: took 8.582856ms to wait for apiserver health ...
	I0510 19:08:14.925108  431837 system_pods.go:43] waiting for kube-system pods to appear ...
	I0510 19:08:14.928460  431837 system_pods.go:59] 7 kube-system pods found
	I0510 19:08:14.928490  431837 system_pods.go:61] "coredns-6d4b75cb6d-ql6bp" [b6f78565-ea70-4741-929d-1b4623a50d49] Running
	I0510 19:08:14.928495  431837 system_pods.go:61] "etcd-test-preload-513090" [ab0bb7dd-4587-4cf4-8095-60cc8be7290a] Running
	I0510 19:08:14.928503  431837 system_pods.go:61] "kube-apiserver-test-preload-513090" [4d63d7c9-78d6-4084-ad81-0ceda2f0fd7b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0510 19:08:14.928509  431837 system_pods.go:61] "kube-controller-manager-test-preload-513090" [7741a8f5-d97b-42c8-8177-6a4b236897b7] Running
	I0510 19:08:14.928516  431837 system_pods.go:61] "kube-proxy-twmh6" [d0e857fd-30e9-4499-be28-e3dff6a133ca] Running
	I0510 19:08:14.928521  431837 system_pods.go:61] "kube-scheduler-test-preload-513090" [25855a32-1c73-4418-a710-7e3f71889264] Running
	I0510 19:08:14.928525  431837 system_pods.go:61] "storage-provisioner" [93838a50-0cc3-47dd-86d3-de7b6fb5f926] Running
	I0510 19:08:14.928534  431837 system_pods.go:74] duration metric: took 3.418386ms to wait for pod list to return data ...
	I0510 19:08:14.928543  431837 default_sa.go:34] waiting for default service account to be created ...
	I0510 19:08:14.930434  431837 default_sa.go:45] found service account: "default"
	I0510 19:08:14.930456  431837 default_sa.go:55] duration metric: took 1.906871ms for default service account to be created ...
	I0510 19:08:14.930464  431837 system_pods.go:116] waiting for k8s-apps to be running ...
	I0510 19:08:14.933831  431837 system_pods.go:86] 7 kube-system pods found
	I0510 19:08:14.933855  431837 system_pods.go:89] "coredns-6d4b75cb6d-ql6bp" [b6f78565-ea70-4741-929d-1b4623a50d49] Running
	I0510 19:08:14.933868  431837 system_pods.go:89] "etcd-test-preload-513090" [ab0bb7dd-4587-4cf4-8095-60cc8be7290a] Running
	I0510 19:08:14.933879  431837 system_pods.go:89] "kube-apiserver-test-preload-513090" [4d63d7c9-78d6-4084-ad81-0ceda2f0fd7b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0510 19:08:14.933886  431837 system_pods.go:89] "kube-controller-manager-test-preload-513090" [7741a8f5-d97b-42c8-8177-6a4b236897b7] Running
	I0510 19:08:14.933892  431837 system_pods.go:89] "kube-proxy-twmh6" [d0e857fd-30e9-4499-be28-e3dff6a133ca] Running
	I0510 19:08:14.933895  431837 system_pods.go:89] "kube-scheduler-test-preload-513090" [25855a32-1c73-4418-a710-7e3f71889264] Running
	I0510 19:08:14.933900  431837 system_pods.go:89] "storage-provisioner" [93838a50-0cc3-47dd-86d3-de7b6fb5f926] Running
	I0510 19:08:14.933909  431837 system_pods.go:126] duration metric: took 3.438803ms to wait for k8s-apps to be running ...
	I0510 19:08:14.933918  431837 system_svc.go:44] waiting for kubelet service to be running ....
	I0510 19:08:14.933971  431837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0510 19:08:14.953940  431837 system_svc.go:56] duration metric: took 20.008909ms WaitForService to wait for kubelet
	I0510 19:08:14.953976  431837 kubeadm.go:578] duration metric: took 9.359344091s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0510 19:08:14.954000  431837 node_conditions.go:102] verifying NodePressure condition ...
	I0510 19:08:14.957162  431837 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0510 19:08:14.957188  431837 node_conditions.go:123] node cpu capacity is 2
	I0510 19:08:14.957200  431837 node_conditions.go:105] duration metric: took 3.195285ms to run NodePressure ...
	I0510 19:08:14.957213  431837 start.go:241] waiting for startup goroutines ...
	I0510 19:08:14.957220  431837 start.go:246] waiting for cluster config update ...
	I0510 19:08:14.957231  431837 start.go:255] writing updated cluster config ...
	I0510 19:08:14.957500  431837 ssh_runner.go:195] Run: rm -f paused
	I0510 19:08:14.962870  431837 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0510 19:08:14.963388  431837 kapi.go:59] client config for test-preload-513090: &rest.Config{Host:"https://192.168.39.59:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20720-388787/.minikube/profiles/test-preload-513090/client.crt", KeyFile:"/home/jenkins/minikube-integration/20720-388787/.minikube/profiles/test-preload-513090/client.key", CAFile:"/home/jenkins/minikube-integration/20720-388787/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint
8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24b3a60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0510 19:08:14.967128  431837 pod_ready.go:83] waiting for pod "coredns-6d4b75cb6d-ql6bp" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:08:14.972048  431837 pod_ready.go:94] pod "coredns-6d4b75cb6d-ql6bp" is "Ready"
	I0510 19:08:14.972072  431837 pod_ready.go:86] duration metric: took 4.906132ms for pod "coredns-6d4b75cb6d-ql6bp" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:08:14.974727  431837 pod_ready.go:83] waiting for pod "etcd-test-preload-513090" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:08:14.979964  431837 pod_ready.go:94] pod "etcd-test-preload-513090" is "Ready"
	I0510 19:08:14.979999  431837 pod_ready.go:86] duration metric: took 5.250518ms for pod "etcd-test-preload-513090" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:08:14.983760  431837 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-513090" in "kube-system" namespace to be "Ready" or be gone ...
	W0510 19:08:16.991577  431837 pod_ready.go:104] pod "kube-apiserver-test-preload-513090" is not "Ready", error: <nil>
	I0510 19:08:18.991695  431837 pod_ready.go:94] pod "kube-apiserver-test-preload-513090" is "Ready"
	I0510 19:08:18.991727  431837 pod_ready.go:86] duration metric: took 4.007936513s for pod "kube-apiserver-test-preload-513090" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:08:18.996352  431837 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-513090" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:08:19.002394  431837 pod_ready.go:94] pod "kube-controller-manager-test-preload-513090" is "Ready"
	I0510 19:08:19.002425  431837 pod_ready.go:86] duration metric: took 6.041971ms for pod "kube-controller-manager-test-preload-513090" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:08:19.006128  431837 pod_ready.go:83] waiting for pod "kube-proxy-twmh6" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:08:19.168318  431837 pod_ready.go:94] pod "kube-proxy-twmh6" is "Ready"
	I0510 19:08:19.168345  431837 pod_ready.go:86] duration metric: took 162.190147ms for pod "kube-proxy-twmh6" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:08:19.368669  431837 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-513090" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:08:19.767767  431837 pod_ready.go:94] pod "kube-scheduler-test-preload-513090" is "Ready"
	I0510 19:08:19.767802  431837 pod_ready.go:86] duration metric: took 399.105236ms for pod "kube-scheduler-test-preload-513090" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:08:19.767813  431837 pod_ready.go:40] duration metric: took 4.804911133s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0510 19:08:19.811563  431837 start.go:607] kubectl: 1.33.0, cluster: 1.24.4 (minor skew: 9)
	I0510 19:08:19.813319  431837 out.go:201] 
	W0510 19:08:19.814713  431837 out.go:270] ! /usr/local/bin/kubectl is version 1.33.0, which may have incompatibilities with Kubernetes 1.24.4.
	I0510 19:08:19.815966  431837 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0510 19:08:19.817191  431837 out.go:177] * Done! kubectl is now configured to use "test-preload-513090" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	May 10 19:08:20 test-preload-513090 crio[857]: time="2025-05-10 19:08:20.870268518Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1408a8e3-c34e-4bf0-9f7e-9964e485247d name=/runtime.v1.RuntimeService/Version
	May 10 19:08:20 test-preload-513090 crio[857]: time="2025-05-10 19:08:20.871764198Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=30546c9d-3dda-4a9e-b97b-05404361be5c name=/runtime.v1.ImageService/ImageFsInfo
	May 10 19:08:20 test-preload-513090 crio[857]: time="2025-05-10 19:08:20.872455538Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746904100872424700,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=30546c9d-3dda-4a9e-b97b-05404361be5c name=/runtime.v1.ImageService/ImageFsInfo
	May 10 19:08:20 test-preload-513090 crio[857]: time="2025-05-10 19:08:20.873356625Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9d2f13ad-ff16-4cdf-9731-0259e0482455 name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:08:20 test-preload-513090 crio[857]: time="2025-05-10 19:08:20.873428667Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9d2f13ad-ff16-4cdf-9731-0259e0482455 name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:08:20 test-preload-513090 crio[857]: time="2025-05-10 19:08:20.873612719Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c059303f61f66e55e03559c817aec206b4c9b8d507c328f61d514cf216007979,PodSandboxId:9ce66a150761e7e4fb0ae09be69929672a3f98f5288c6199dc7166c60198b5b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1746904093010988664,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-ql6bp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6f78565-ea70-4741-929d-1b4623a50d49,},Annotations:map[string]string{io.kubernetes.container.hash: 3e8e88a4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a3faf9fab53df43148278eb1f2bff40c99478a8994fe66806e834d9d99f46ec,PodSandboxId:324a9c8094ad13e013ff9263f859666f28c996efdab3ad5c78335b50b14505aa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1746904085919028088,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-twmh6,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: d0e857fd-30e9-4499-be28-e3dff6a133ca,},Annotations:map[string]string{io.kubernetes.container.hash: 94697d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77ccb80eaf741e1ecfefeb1be52173e4de36ebc6932781d2f88b998d51c3307c,PodSandboxId:83650f44298feaff82d3e8869b43de2e21ead9a789bfe6f0934e7773823edff7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1746904085879726656,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93
838a50-0cc3-47dd-86d3-de7b6fb5f926,},Annotations:map[string]string{io.kubernetes.container.hash: 8b29603d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a46ae32672caee4201694495d93b824cbf08785252e7b25a985142845f0f9cd,PodSandboxId:07471b8c2c3ed594b8cb83ba3748c065e503ce5570fc355bcbc6570d24f7f71e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1746904079574312324,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-513090,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6bf445cf
dd981e52511eed182d8eaa0,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30460360d0b1d96e3238be610796c5833286759aeb9eb04b3b0f348e68487233,PodSandboxId:b94a3791452ac1e85bb9178eae8db29d00e7f8c9503f0f57df9d515daaffccc3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1746904079563402622,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-513090,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: e8242fbe2c719f4aa01d0e44d00e2429,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cb937515d481f3b1d62a3efc7dbec7ab3e31f09f315e7bfff3a566957ee4d93,PodSandboxId:93223bf108aa0a58ac5d526abd4721d8e07511385699d112d9be56a5a79318d6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1746904079466550978,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-513090,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09eb
b5ac1406f8e588c6ea0a0d176ba7,},Annotations:map[string]string{io.kubernetes.container.hash: 392504fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbc57300a118339c4652049811ecc597c9b8b5c457fa2329727ec6d99296dce7,PodSandboxId:5783d56a9a487f4593c166a124aa8176bc8f896f16848f75fae9719fce1442ea,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1746904079473978077,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-513090,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8b7dfc1da7ae7a99592809bf8895b9e,},Annotation
s:map[string]string{io.kubernetes.container.hash: a3826556,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9d2f13ad-ff16-4cdf-9731-0259e0482455 name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:08:20 test-preload-513090 crio[857]: time="2025-05-10 19:08:20.918210229Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d6130867-70f1-4ec9-8d73-70c2a52334a2 name=/runtime.v1.RuntimeService/Version
	May 10 19:08:20 test-preload-513090 crio[857]: time="2025-05-10 19:08:20.918281228Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d6130867-70f1-4ec9-8d73-70c2a52334a2 name=/runtime.v1.RuntimeService/Version
	May 10 19:08:20 test-preload-513090 crio[857]: time="2025-05-10 19:08:20.920025210Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=be193127-0f8c-4f32-abda-18d71fadf9c1 name=/runtime.v1.ImageService/ImageFsInfo
	May 10 19:08:20 test-preload-513090 crio[857]: time="2025-05-10 19:08:20.920679511Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746904100920651774,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=be193127-0f8c-4f32-abda-18d71fadf9c1 name=/runtime.v1.ImageService/ImageFsInfo
	May 10 19:08:20 test-preload-513090 crio[857]: time="2025-05-10 19:08:20.921465049Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7e144cb6-7e93-4343-93c6-7bb99979e6f7 name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:08:20 test-preload-513090 crio[857]: time="2025-05-10 19:08:20.921538721Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7e144cb6-7e93-4343-93c6-7bb99979e6f7 name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:08:20 test-preload-513090 crio[857]: time="2025-05-10 19:08:20.921697318Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c059303f61f66e55e03559c817aec206b4c9b8d507c328f61d514cf216007979,PodSandboxId:9ce66a150761e7e4fb0ae09be69929672a3f98f5288c6199dc7166c60198b5b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1746904093010988664,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-ql6bp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6f78565-ea70-4741-929d-1b4623a50d49,},Annotations:map[string]string{io.kubernetes.container.hash: 3e8e88a4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a3faf9fab53df43148278eb1f2bff40c99478a8994fe66806e834d9d99f46ec,PodSandboxId:324a9c8094ad13e013ff9263f859666f28c996efdab3ad5c78335b50b14505aa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1746904085919028088,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-twmh6,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: d0e857fd-30e9-4499-be28-e3dff6a133ca,},Annotations:map[string]string{io.kubernetes.container.hash: 94697d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77ccb80eaf741e1ecfefeb1be52173e4de36ebc6932781d2f88b998d51c3307c,PodSandboxId:83650f44298feaff82d3e8869b43de2e21ead9a789bfe6f0934e7773823edff7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1746904085879726656,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93
838a50-0cc3-47dd-86d3-de7b6fb5f926,},Annotations:map[string]string{io.kubernetes.container.hash: 8b29603d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a46ae32672caee4201694495d93b824cbf08785252e7b25a985142845f0f9cd,PodSandboxId:07471b8c2c3ed594b8cb83ba3748c065e503ce5570fc355bcbc6570d24f7f71e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1746904079574312324,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-513090,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6bf445cf
dd981e52511eed182d8eaa0,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30460360d0b1d96e3238be610796c5833286759aeb9eb04b3b0f348e68487233,PodSandboxId:b94a3791452ac1e85bb9178eae8db29d00e7f8c9503f0f57df9d515daaffccc3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1746904079563402622,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-513090,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: e8242fbe2c719f4aa01d0e44d00e2429,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cb937515d481f3b1d62a3efc7dbec7ab3e31f09f315e7bfff3a566957ee4d93,PodSandboxId:93223bf108aa0a58ac5d526abd4721d8e07511385699d112d9be56a5a79318d6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1746904079466550978,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-513090,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09eb
b5ac1406f8e588c6ea0a0d176ba7,},Annotations:map[string]string{io.kubernetes.container.hash: 392504fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbc57300a118339c4652049811ecc597c9b8b5c457fa2329727ec6d99296dce7,PodSandboxId:5783d56a9a487f4593c166a124aa8176bc8f896f16848f75fae9719fce1442ea,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1746904079473978077,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-513090,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8b7dfc1da7ae7a99592809bf8895b9e,},Annotation
s:map[string]string{io.kubernetes.container.hash: a3826556,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7e144cb6-7e93-4343-93c6-7bb99979e6f7 name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:08:20 test-preload-513090 crio[857]: time="2025-05-10 19:08:20.948961450Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=2887a96d-6d24-4678-b99a-7a3c5bc625b0 name=/runtime.v1.RuntimeService/ListPodSandbox
	May 10 19:08:20 test-preload-513090 crio[857]: time="2025-05-10 19:08:20.949193674Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:9ce66a150761e7e4fb0ae09be69929672a3f98f5288c6199dc7166c60198b5b1,Metadata:&PodSandboxMetadata{Name:coredns-6d4b75cb6d-ql6bp,Uid:b6f78565-ea70-4741-929d-1b4623a50d49,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1746904092758663065,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6d4b75cb6d-ql6bp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6f78565-ea70-4741-929d-1b4623a50d49,k8s-app: kube-dns,pod-template-hash: 6d4b75cb6d,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-05-10T19:08:04.732780393Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:83650f44298feaff82d3e8869b43de2e21ead9a789bfe6f0934e7773823edff7,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:93838a50-0cc3-47dd-86d3-de7b6fb5f926,Namespace:kube-syste
m,Attempt:0,},State:SANDBOX_READY,CreatedAt:1746904085663878123,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93838a50-0cc3-47dd-86d3-de7b6fb5f926,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath
\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-05-10T19:08:04.732779380Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:324a9c8094ad13e013ff9263f859666f28c996efdab3ad5c78335b50b14505aa,Metadata:&PodSandboxMetadata{Name:kube-proxy-twmh6,Uid:d0e857fd-30e9-4499-be28-e3dff6a133ca,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1746904085646303280,Labels:map[string]string{controller-revision-hash: 6fd4744df8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-twmh6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0e857fd-30e9-4499-be28-e3dff6a133ca,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-05-10T19:08:04.732777256Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:07471b8c2c3ed594b8cb83ba3748c065e503ce5570fc355bcbc6570d24f7f71e,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-513090,Uid:f6bf445
cfdd981e52511eed182d8eaa0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1746904079293031924,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-preload-513090,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6bf445cfdd981e52511eed182d8eaa0,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f6bf445cfdd981e52511eed182d8eaa0,kubernetes.io/config.seen: 2025-05-10T19:07:58.726142313Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b94a3791452ac1e85bb9178eae8db29d00e7f8c9503f0f57df9d515daaffccc3,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-513090,Uid:e8242fbe2c719f4aa01d0e44d00e2429,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1746904079290475642,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-test-preload-513090,io.kuber
netes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8242fbe2c719f4aa01d0e44d00e2429,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e8242fbe2c719f4aa01d0e44d00e2429,kubernetes.io/config.seen: 2025-05-10T19:07:58.726176255Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5783d56a9a487f4593c166a124aa8176bc8f896f16848f75fae9719fce1442ea,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-513090,Uid:b8b7dfc1da7ae7a99592809bf8895b9e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1746904079274207468,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-test-preload-513090,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8b7dfc1da7ae7a99592809bf8895b9e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.59:2379,kubernetes.io/config.hash: b8b7dfc1da7ae7a99592809bf8895b9e,kubernetes.io/config.seen: 2025-05-10T19:
07:58.726502786Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:93223bf108aa0a58ac5d526abd4721d8e07511385699d112d9be56a5a79318d6,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-513090,Uid:09ebb5ac1406f8e588c6ea0a0d176ba7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1746904079265942504,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-513090,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09ebb5ac1406f8e588c6ea0a0d176ba7,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.59:8443,kubernetes.io/config.hash: 09ebb5ac1406f8e588c6ea0a0d176ba7,kubernetes.io/config.seen: 2025-05-10T19:07:58.726173953Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=2887a96d-6d24-4678-b99a-7a3c5bc625b0 name=/runtime.v1.RuntimeService/ListPodSandbox
	May 10 19:08:20 test-preload-513090 crio[857]: time="2025-05-10 19:08:20.949672421Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9c27ecd7-2fb8-4cd6-9ff8-7a23496aae10 name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:08:20 test-preload-513090 crio[857]: time="2025-05-10 19:08:20.949787666Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9c27ecd7-2fb8-4cd6-9ff8-7a23496aae10 name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:08:20 test-preload-513090 crio[857]: time="2025-05-10 19:08:20.950014180Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c059303f61f66e55e03559c817aec206b4c9b8d507c328f61d514cf216007979,PodSandboxId:9ce66a150761e7e4fb0ae09be69929672a3f98f5288c6199dc7166c60198b5b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1746904093010988664,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-ql6bp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6f78565-ea70-4741-929d-1b4623a50d49,},Annotations:map[string]string{io.kubernetes.container.hash: 3e8e88a4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a3faf9fab53df43148278eb1f2bff40c99478a8994fe66806e834d9d99f46ec,PodSandboxId:324a9c8094ad13e013ff9263f859666f28c996efdab3ad5c78335b50b14505aa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1746904085919028088,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-twmh6,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: d0e857fd-30e9-4499-be28-e3dff6a133ca,},Annotations:map[string]string{io.kubernetes.container.hash: 94697d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77ccb80eaf741e1ecfefeb1be52173e4de36ebc6932781d2f88b998d51c3307c,PodSandboxId:83650f44298feaff82d3e8869b43de2e21ead9a789bfe6f0934e7773823edff7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1746904085879726656,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93
838a50-0cc3-47dd-86d3-de7b6fb5f926,},Annotations:map[string]string{io.kubernetes.container.hash: 8b29603d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a46ae32672caee4201694495d93b824cbf08785252e7b25a985142845f0f9cd,PodSandboxId:07471b8c2c3ed594b8cb83ba3748c065e503ce5570fc355bcbc6570d24f7f71e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1746904079574312324,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-513090,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6bf445cf
dd981e52511eed182d8eaa0,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30460360d0b1d96e3238be610796c5833286759aeb9eb04b3b0f348e68487233,PodSandboxId:b94a3791452ac1e85bb9178eae8db29d00e7f8c9503f0f57df9d515daaffccc3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1746904079563402622,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-513090,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: e8242fbe2c719f4aa01d0e44d00e2429,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cb937515d481f3b1d62a3efc7dbec7ab3e31f09f315e7bfff3a566957ee4d93,PodSandboxId:93223bf108aa0a58ac5d526abd4721d8e07511385699d112d9be56a5a79318d6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1746904079466550978,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-513090,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09eb
b5ac1406f8e588c6ea0a0d176ba7,},Annotations:map[string]string{io.kubernetes.container.hash: 392504fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbc57300a118339c4652049811ecc597c9b8b5c457fa2329727ec6d99296dce7,PodSandboxId:5783d56a9a487f4593c166a124aa8176bc8f896f16848f75fae9719fce1442ea,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1746904079473978077,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-513090,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8b7dfc1da7ae7a99592809bf8895b9e,},Annotation
s:map[string]string{io.kubernetes.container.hash: a3826556,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9c27ecd7-2fb8-4cd6-9ff8-7a23496aae10 name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:08:20 test-preload-513090 crio[857]: time="2025-05-10 19:08:20.963280512Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=301769fa-024b-437f-89c6-d247b5a8adc7 name=/runtime.v1.RuntimeService/Version
	May 10 19:08:20 test-preload-513090 crio[857]: time="2025-05-10 19:08:20.963375346Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=301769fa-024b-437f-89c6-d247b5a8adc7 name=/runtime.v1.RuntimeService/Version
	May 10 19:08:20 test-preload-513090 crio[857]: time="2025-05-10 19:08:20.965185935Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=20b47646-e62f-4ee4-a1ec-9d07eda45ecd name=/runtime.v1.ImageService/ImageFsInfo
	May 10 19:08:20 test-preload-513090 crio[857]: time="2025-05-10 19:08:20.965664109Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746904100965639070,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=20b47646-e62f-4ee4-a1ec-9d07eda45ecd name=/runtime.v1.ImageService/ImageFsInfo
	May 10 19:08:20 test-preload-513090 crio[857]: time="2025-05-10 19:08:20.966435042Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6ae4e371-0dca-4af7-8f16-5b188e2c3456 name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:08:20 test-preload-513090 crio[857]: time="2025-05-10 19:08:20.966483906Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6ae4e371-0dca-4af7-8f16-5b188e2c3456 name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:08:20 test-preload-513090 crio[857]: time="2025-05-10 19:08:20.966642965Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c059303f61f66e55e03559c817aec206b4c9b8d507c328f61d514cf216007979,PodSandboxId:9ce66a150761e7e4fb0ae09be69929672a3f98f5288c6199dc7166c60198b5b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1746904093010988664,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-ql6bp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6f78565-ea70-4741-929d-1b4623a50d49,},Annotations:map[string]string{io.kubernetes.container.hash: 3e8e88a4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a3faf9fab53df43148278eb1f2bff40c99478a8994fe66806e834d9d99f46ec,PodSandboxId:324a9c8094ad13e013ff9263f859666f28c996efdab3ad5c78335b50b14505aa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1746904085919028088,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-twmh6,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: d0e857fd-30e9-4499-be28-e3dff6a133ca,},Annotations:map[string]string{io.kubernetes.container.hash: 94697d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77ccb80eaf741e1ecfefeb1be52173e4de36ebc6932781d2f88b998d51c3307c,PodSandboxId:83650f44298feaff82d3e8869b43de2e21ead9a789bfe6f0934e7773823edff7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1746904085879726656,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93
838a50-0cc3-47dd-86d3-de7b6fb5f926,},Annotations:map[string]string{io.kubernetes.container.hash: 8b29603d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a46ae32672caee4201694495d93b824cbf08785252e7b25a985142845f0f9cd,PodSandboxId:07471b8c2c3ed594b8cb83ba3748c065e503ce5570fc355bcbc6570d24f7f71e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1746904079574312324,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-513090,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6bf445cf
dd981e52511eed182d8eaa0,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30460360d0b1d96e3238be610796c5833286759aeb9eb04b3b0f348e68487233,PodSandboxId:b94a3791452ac1e85bb9178eae8db29d00e7f8c9503f0f57df9d515daaffccc3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1746904079563402622,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-513090,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: e8242fbe2c719f4aa01d0e44d00e2429,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cb937515d481f3b1d62a3efc7dbec7ab3e31f09f315e7bfff3a566957ee4d93,PodSandboxId:93223bf108aa0a58ac5d526abd4721d8e07511385699d112d9be56a5a79318d6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1746904079466550978,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-513090,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09eb
b5ac1406f8e588c6ea0a0d176ba7,},Annotations:map[string]string{io.kubernetes.container.hash: 392504fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbc57300a118339c4652049811ecc597c9b8b5c457fa2329727ec6d99296dce7,PodSandboxId:5783d56a9a487f4593c166a124aa8176bc8f896f16848f75fae9719fce1442ea,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1746904079473978077,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-513090,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8b7dfc1da7ae7a99592809bf8895b9e,},Annotation
s:map[string]string{io.kubernetes.container.hash: a3826556,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6ae4e371-0dca-4af7-8f16-5b188e2c3456 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c059303f61f66       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   8 seconds ago       Running             coredns                   1                   9ce66a150761e       coredns-6d4b75cb6d-ql6bp
	4a3faf9fab53d       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   15 seconds ago      Running             kube-proxy                1                   324a9c8094ad1       kube-proxy-twmh6
	77ccb80eaf741       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 seconds ago      Running             storage-provisioner       1                   83650f44298fe       storage-provisioner
	7a46ae32672ca       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   21 seconds ago      Running             kube-scheduler            1                   07471b8c2c3ed       kube-scheduler-test-preload-513090
	30460360d0b1d       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   21 seconds ago      Running             kube-controller-manager   1                   b94a3791452ac       kube-controller-manager-test-preload-513090
	fbc57300a1183       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   21 seconds ago      Running             etcd                      1                   5783d56a9a487       etcd-test-preload-513090
	6cb937515d481       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   21 seconds ago      Running             kube-apiserver            1                   93223bf108aa0       kube-apiserver-test-preload-513090
	
	
	==> coredns [c059303f61f66e55e03559c817aec206b4c9b8d507c328f61d514cf216007979] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:33135 - 29839 "HINFO IN 7576136029154930693.3584015300974644544. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.04503607s
	
	
	==> describe nodes <==
	Name:               test-preload-513090
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-513090
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e96c83983357cd8557f3cdfe077a25cc73d485a4
	                    minikube.k8s.io/name=test-preload-513090
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_05_10T19_04_47_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 May 2025 19:04:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-513090
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 May 2025 19:08:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 May 2025 19:08:14 +0000   Sat, 10 May 2025 19:04:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 May 2025 19:08:14 +0000   Sat, 10 May 2025 19:04:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 May 2025 19:08:14 +0000   Sat, 10 May 2025 19:04:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 May 2025 19:08:14 +0000   Sat, 10 May 2025 19:08:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.59
	  Hostname:    test-preload-513090
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164144Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164144Ki
	  pods:               110
	System Info:
	  Machine ID:                 1f053e636ea44f61bf67ab647d706e5e
	  System UUID:                1f053e63-6ea4-4f61-bf67-ab647d706e5e
	  Boot ID:                    62f309ba-678d-4174-a492-9d471d3b3a78
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2024.11.2
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-ql6bp                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     3m22s
	  kube-system                 etcd-test-preload-513090                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m35s
	  kube-system                 kube-apiserver-test-preload-513090             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 kube-controller-manager-test-preload-513090    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 kube-proxy-twmh6                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m22s
	  kube-system                 kube-scheduler-test-preload-513090             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 14s                    kube-proxy       
	  Normal  Starting                 3m19s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m42s (x4 over 3m42s)  kubelet          Node test-preload-513090 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m42s (x4 over 3m42s)  kubelet          Node test-preload-513090 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m42s (x4 over 3m42s)  kubelet          Node test-preload-513090 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m34s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m34s                  kubelet          Node test-preload-513090 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m34s                  kubelet          Node test-preload-513090 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m34s                  kubelet          Node test-preload-513090 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m24s                  kubelet          Node test-preload-513090 status is now: NodeReady
	  Normal  RegisteredNode           3m23s                  node-controller  Node test-preload-513090 event: Registered Node test-preload-513090 in Controller
	  Normal  Starting                 23s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x9 over 23s)      kubelet          Node test-preload-513090 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x7 over 23s)      kubelet          Node test-preload-513090 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 23s)      kubelet          Node test-preload-513090 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5s                     node-controller  Node test-preload-513090 event: Registered Node test-preload-513090 in Controller
	
	
	==> dmesg <==
	[May10 19:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.000002] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.001465] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.005784] (rpcbind)[143]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.026172] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000003] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.083502] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.100098] kauditd_printk_skb: 46 callbacks suppressed
	[May10 19:08] kauditd_printk_skb: 105 callbacks suppressed
	[  +0.000071] kauditd_printk_skb: 29 callbacks suppressed
	
	
	==> etcd [fbc57300a118339c4652049811ecc597c9b8b5c457fa2329727ec6d99296dce7] <==
	{"level":"info","ts":"2025-05-10T19:07:59.879Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"8376b9efef0ac538","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2025-05-10T19:07:59.884Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-05-10T19:07:59.884Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-05-10T19:07:59.885Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-05-10T19:07:59.884Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8376b9efef0ac538","initial-advertise-peer-urls":["https://192.168.39.59:2380"],"listen-peer-urls":["https://192.168.39.59:2380"],"advertise-client-urls":["https://192.168.39.59:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.59:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-05-10T19:07:59.885Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.59:2380"}
	{"level":"info","ts":"2025-05-10T19:07:59.885Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.59:2380"}
	{"level":"info","ts":"2025-05-10T19:07:59.885Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8376b9efef0ac538 switched to configuration voters=(9472963306379199800)"}
	{"level":"info","ts":"2025-05-10T19:07:59.886Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ec2082d3763590b8","local-member-id":"8376b9efef0ac538","added-peer-id":"8376b9efef0ac538","added-peer-peer-urls":["https://192.168.39.59:2380"]}
	{"level":"info","ts":"2025-05-10T19:07:59.886Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ec2082d3763590b8","local-member-id":"8376b9efef0ac538","cluster-version":"3.5"}
	{"level":"info","ts":"2025-05-10T19:07:59.886Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-05-10T19:08:00.912Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8376b9efef0ac538 is starting a new election at term 2"}
	{"level":"info","ts":"2025-05-10T19:08:00.913Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8376b9efef0ac538 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-05-10T19:08:00.913Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8376b9efef0ac538 received MsgPreVoteResp from 8376b9efef0ac538 at term 2"}
	{"level":"info","ts":"2025-05-10T19:08:00.913Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8376b9efef0ac538 became candidate at term 3"}
	{"level":"info","ts":"2025-05-10T19:08:00.913Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8376b9efef0ac538 received MsgVoteResp from 8376b9efef0ac538 at term 3"}
	{"level":"info","ts":"2025-05-10T19:08:00.913Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8376b9efef0ac538 became leader at term 3"}
	{"level":"info","ts":"2025-05-10T19:08:00.913Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8376b9efef0ac538 elected leader 8376b9efef0ac538 at term 3"}
	{"level":"info","ts":"2025-05-10T19:08:00.913Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8376b9efef0ac538","local-member-attributes":"{Name:test-preload-513090 ClientURLs:[https://192.168.39.59:2379]}","request-path":"/0/members/8376b9efef0ac538/attributes","cluster-id":"ec2082d3763590b8","publish-timeout":"7s"}
	{"level":"info","ts":"2025-05-10T19:08:00.915Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T19:08:00.920Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T19:08:00.923Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.59:2379"}
	{"level":"info","ts":"2025-05-10T19:08:00.931Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-05-10T19:08:00.932Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-05-10T19:08:00.940Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 19:08:21 up 0 min,  0 user,  load average: 1.06, 0.30, 0.10
	Linux test-preload-513090 5.10.207 #1 SMP Fri May 9 03:49:24 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2024.11.2"
	
	
	==> kube-apiserver [6cb937515d481f3b1d62a3efc7dbec7ab3e31f09f315e7bfff3a566957ee4d93] <==
	I0510 19:08:04.031572       1 controller.go:85] Starting OpenAPI controller
	I0510 19:08:04.031587       1 controller.go:85] Starting OpenAPI V3 controller
	I0510 19:08:04.031624       1 naming_controller.go:291] Starting NamingConditionController
	I0510 19:08:04.031646       1 establishing_controller.go:76] Starting EstablishingController
	I0510 19:08:04.031667       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0510 19:08:04.031681       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0510 19:08:04.031695       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0510 19:08:04.101905       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0510 19:08:04.104543       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0510 19:08:04.113765       1 cache.go:39] Caches are synced for autoregister controller
	I0510 19:08:04.156702       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0510 19:08:04.164268       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0510 19:08:04.164336       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0510 19:08:04.178336       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0510 19:08:04.189017       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0510 19:08:04.629258       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0510 19:08:05.007974       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0510 19:08:05.450463       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0510 19:08:05.465783       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0510 19:08:05.516610       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0510 19:08:05.549121       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0510 19:08:05.556524       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0510 19:08:06.668452       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0510 19:08:16.766173       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0510 19:08:16.812404       1 controller.go:611] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [30460360d0b1d96e3238be610796c5833286759aeb9eb04b3b0f348e68487233] <==
	I0510 19:08:16.759567       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0510 19:08:16.761946       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0510 19:08:16.764948       1 shared_informer.go:262] Caches are synced for disruption
	I0510 19:08:16.765374       1 disruption.go:371] Sending events to api server.
	I0510 19:08:16.765315       1 shared_informer.go:262] Caches are synced for ephemeral
	I0510 19:08:16.770749       1 shared_informer.go:262] Caches are synced for taint
	I0510 19:08:16.771035       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0510 19:08:16.771386       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	I0510 19:08:16.771735       1 event.go:294] "Event occurred" object="test-preload-513090" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-513090 event: Registered Node test-preload-513090 in Controller"
	W0510 19:08:16.772018       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-513090. Assuming now as a timestamp.
	I0510 19:08:16.772113       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0510 19:08:16.775462       1 shared_informer.go:262] Caches are synced for attach detach
	I0510 19:08:16.784683       1 shared_informer.go:262] Caches are synced for stateful set
	I0510 19:08:16.791345       1 shared_informer.go:262] Caches are synced for HPA
	I0510 19:08:16.793723       1 shared_informer.go:262] Caches are synced for daemon sets
	I0510 19:08:16.793933       1 shared_informer.go:262] Caches are synced for PVC protection
	I0510 19:08:16.795118       1 shared_informer.go:262] Caches are synced for job
	I0510 19:08:16.802097       1 shared_informer.go:262] Caches are synced for endpoint
	I0510 19:08:16.805335       1 shared_informer.go:262] Caches are synced for GC
	I0510 19:08:16.818969       1 shared_informer.go:262] Caches are synced for resource quota
	I0510 19:08:16.852378       1 shared_informer.go:262] Caches are synced for namespace
	I0510 19:08:16.855720       1 shared_informer.go:262] Caches are synced for service account
	I0510 19:08:17.290229       1 shared_informer.go:262] Caches are synced for garbage collector
	I0510 19:08:17.315134       1 shared_informer.go:262] Caches are synced for garbage collector
	I0510 19:08:17.315172       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [4a3faf9fab53df43148278eb1f2bff40c99478a8994fe66806e834d9d99f46ec] <==
	I0510 19:08:06.591352       1 node.go:163] Successfully retrieved node IP: 192.168.39.59
	I0510 19:08:06.591574       1 server_others.go:138] "Detected node IP" address="192.168.39.59"
	I0510 19:08:06.591682       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0510 19:08:06.653238       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0510 19:08:06.653280       1 server_others.go:206] "Using iptables Proxier"
	I0510 19:08:06.653355       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0510 19:08:06.654925       1 server.go:661] "Version info" version="v1.24.4"
	I0510 19:08:06.654959       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0510 19:08:06.656535       1 config.go:317] "Starting service config controller"
	I0510 19:08:06.656740       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0510 19:08:06.656797       1 config.go:226] "Starting endpoint slice config controller"
	I0510 19:08:06.656892       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0510 19:08:06.658112       1 config.go:444] "Starting node config controller"
	I0510 19:08:06.658141       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0510 19:08:06.757283       1 shared_informer.go:262] Caches are synced for service config
	I0510 19:08:06.757472       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0510 19:08:06.758534       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [7a46ae32672caee4201694495d93b824cbf08785252e7b25a985142845f0f9cd] <==
	I0510 19:08:01.066986       1 serving.go:348] Generated self-signed cert in-memory
	W0510 19:08:04.055019       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0510 19:08:04.055110       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0510 19:08:04.055123       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0510 19:08:04.055129       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0510 19:08:04.122119       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0510 19:08:04.122188       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0510 19:08:04.128394       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0510 19:08:04.130940       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0510 19:08:04.130983       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0510 19:08:04.131024       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0510 19:08:04.231247       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 10 19:08:04 test-preload-513090 kubelet[1474]: I0510 19:08:04.172380    1474 setters.go:532] "Node became not ready" node="test-preload-513090" condition={Type:Ready Status:False LastHeartbeatTime:2025-05-10 19:08:04.172320554 +0000 UTC m=+5.603235927 LastTransitionTime:2025-05-10 19:08:04.172320554 +0000 UTC m=+5.603235927 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?}
	May 10 19:08:04 test-preload-513090 kubelet[1474]: I0510 19:08:04.188854    1474 setters.go:532] "Node became not ready" node="test-preload-513090" condition={Type:Ready Status:False LastHeartbeatTime:2025-05-10 19:08:04.18874016 +0000 UTC m=+5.619655532 LastTransitionTime:2025-05-10 19:08:04.18874016 +0000 UTC m=+5.619655532 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?}
	May 10 19:08:04 test-preload-513090 kubelet[1474]: I0510 19:08:04.725007    1474 apiserver.go:52] "Watching apiserver"
	May 10 19:08:04 test-preload-513090 kubelet[1474]: I0510 19:08:04.733053    1474 topology_manager.go:200] "Topology Admit Handler"
	May 10 19:08:04 test-preload-513090 kubelet[1474]: I0510 19:08:04.733172    1474 topology_manager.go:200] "Topology Admit Handler"
	May 10 19:08:04 test-preload-513090 kubelet[1474]: I0510 19:08:04.733208    1474 topology_manager.go:200] "Topology Admit Handler"
	May 10 19:08:04 test-preload-513090 kubelet[1474]: E0510 19:08:04.734252    1474 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-ql6bp" podUID=b6f78565-ea70-4741-929d-1b4623a50d49
	May 10 19:08:04 test-preload-513090 kubelet[1474]: I0510 19:08:04.799483    1474 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/93838a50-0cc3-47dd-86d3-de7b6fb5f926-tmp\") pod \"storage-provisioner\" (UID: \"93838a50-0cc3-47dd-86d3-de7b6fb5f926\") " pod="kube-system/storage-provisioner"
	May 10 19:08:04 test-preload-513090 kubelet[1474]: I0510 19:08:04.799560    1474 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b6f78565-ea70-4741-929d-1b4623a50d49-config-volume\") pod \"coredns-6d4b75cb6d-ql6bp\" (UID: \"b6f78565-ea70-4741-929d-1b4623a50d49\") " pod="kube-system/coredns-6d4b75cb6d-ql6bp"
	May 10 19:08:04 test-preload-513090 kubelet[1474]: I0510 19:08:04.799590    1474 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2xt5\" (UniqueName: \"kubernetes.io/projected/b6f78565-ea70-4741-929d-1b4623a50d49-kube-api-access-j2xt5\") pod \"coredns-6d4b75cb6d-ql6bp\" (UID: \"b6f78565-ea70-4741-929d-1b4623a50d49\") " pod="kube-system/coredns-6d4b75cb6d-ql6bp"
	May 10 19:08:04 test-preload-513090 kubelet[1474]: I0510 19:08:04.799610    1474 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d0e857fd-30e9-4499-be28-e3dff6a133ca-kube-proxy\") pod \"kube-proxy-twmh6\" (UID: \"d0e857fd-30e9-4499-be28-e3dff6a133ca\") " pod="kube-system/kube-proxy-twmh6"
	May 10 19:08:04 test-preload-513090 kubelet[1474]: I0510 19:08:04.799645    1474 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54nsh\" (UniqueName: \"kubernetes.io/projected/93838a50-0cc3-47dd-86d3-de7b6fb5f926-kube-api-access-54nsh\") pod \"storage-provisioner\" (UID: \"93838a50-0cc3-47dd-86d3-de7b6fb5f926\") " pod="kube-system/storage-provisioner"
	May 10 19:08:04 test-preload-513090 kubelet[1474]: I0510 19:08:04.799663    1474 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d0e857fd-30e9-4499-be28-e3dff6a133ca-xtables-lock\") pod \"kube-proxy-twmh6\" (UID: \"d0e857fd-30e9-4499-be28-e3dff6a133ca\") " pod="kube-system/kube-proxy-twmh6"
	May 10 19:08:04 test-preload-513090 kubelet[1474]: I0510 19:08:04.799679    1474 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d0e857fd-30e9-4499-be28-e3dff6a133ca-lib-modules\") pod \"kube-proxy-twmh6\" (UID: \"d0e857fd-30e9-4499-be28-e3dff6a133ca\") " pod="kube-system/kube-proxy-twmh6"
	May 10 19:08:04 test-preload-513090 kubelet[1474]: I0510 19:08:04.799698    1474 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjpvx\" (UniqueName: \"kubernetes.io/projected/d0e857fd-30e9-4499-be28-e3dff6a133ca-kube-api-access-zjpvx\") pod \"kube-proxy-twmh6\" (UID: \"d0e857fd-30e9-4499-be28-e3dff6a133ca\") " pod="kube-system/kube-proxy-twmh6"
	May 10 19:08:04 test-preload-513090 kubelet[1474]: I0510 19:08:04.799711    1474 reconciler.go:159] "Reconciler: start to sync state"
	May 10 19:08:04 test-preload-513090 kubelet[1474]: E0510 19:08:04.904398    1474 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	May 10 19:08:04 test-preload-513090 kubelet[1474]: E0510 19:08:04.904512    1474 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/b6f78565-ea70-4741-929d-1b4623a50d49-config-volume podName:b6f78565-ea70-4741-929d-1b4623a50d49 nodeName:}" failed. No retries permitted until 2025-05-10 19:08:05.40448176 +0000 UTC m=+6.835397145 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b6f78565-ea70-4741-929d-1b4623a50d49-config-volume") pod "coredns-6d4b75cb6d-ql6bp" (UID: "b6f78565-ea70-4741-929d-1b4623a50d49") : object "kube-system"/"coredns" not registered
	May 10 19:08:05 test-preload-513090 kubelet[1474]: E0510 19:08:05.407634    1474 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	May 10 19:08:05 test-preload-513090 kubelet[1474]: E0510 19:08:05.407700    1474 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/b6f78565-ea70-4741-929d-1b4623a50d49-config-volume podName:b6f78565-ea70-4741-929d-1b4623a50d49 nodeName:}" failed. No retries permitted until 2025-05-10 19:08:06.40768625 +0000 UTC m=+7.838601635 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b6f78565-ea70-4741-929d-1b4623a50d49-config-volume") pod "coredns-6d4b75cb6d-ql6bp" (UID: "b6f78565-ea70-4741-929d-1b4623a50d49") : object "kube-system"/"coredns" not registered
	May 10 19:08:06 test-preload-513090 kubelet[1474]: E0510 19:08:06.414710    1474 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	May 10 19:08:06 test-preload-513090 kubelet[1474]: E0510 19:08:06.414806    1474 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/b6f78565-ea70-4741-929d-1b4623a50d49-config-volume podName:b6f78565-ea70-4741-929d-1b4623a50d49 nodeName:}" failed. No retries permitted until 2025-05-10 19:08:08.414790618 +0000 UTC m=+9.845706002 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b6f78565-ea70-4741-929d-1b4623a50d49-config-volume") pod "coredns-6d4b75cb6d-ql6bp" (UID: "b6f78565-ea70-4741-929d-1b4623a50d49") : object "kube-system"/"coredns" not registered
	May 10 19:08:06 test-preload-513090 kubelet[1474]: E0510 19:08:06.849810    1474 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-ql6bp" podUID=b6f78565-ea70-4741-929d-1b4623a50d49
	May 10 19:08:08 test-preload-513090 kubelet[1474]: E0510 19:08:08.442691    1474 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	May 10 19:08:08 test-preload-513090 kubelet[1474]: E0510 19:08:08.443055    1474 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/b6f78565-ea70-4741-929d-1b4623a50d49-config-volume podName:b6f78565-ea70-4741-929d-1b4623a50d49 nodeName:}" failed. No retries permitted until 2025-05-10 19:08:12.443027475 +0000 UTC m=+13.873942857 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b6f78565-ea70-4741-929d-1b4623a50d49-config-volume") pod "coredns-6d4b75cb6d-ql6bp" (UID: "b6f78565-ea70-4741-929d-1b4623a50d49") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [77ccb80eaf741e1ecfefeb1be52173e4de36ebc6932781d2f88b998d51c3307c] <==
	I0510 19:08:06.145489       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-513090 -n test-preload-513090
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-513090 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-513090" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-513090
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-513090: (1.208469137s)
--- FAIL: TestPreload (289.32s)

                                                
                                    
TestKubernetesUpgrade (459.94s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-517660 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-517660 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m38.915924236s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-517660] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20720
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20720-388787/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-388787/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-517660" primary control-plane node in "kubernetes-upgrade-517660" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0510 19:10:30.011843  435640 out.go:345] Setting OutFile to fd 1 ...
	I0510 19:10:30.012138  435640 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 19:10:30.012149  435640 out.go:358] Setting ErrFile to fd 2...
	I0510 19:10:30.012153  435640 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 19:10:30.012404  435640 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-388787/.minikube/bin
	I0510 19:10:30.013062  435640 out.go:352] Setting JSON to false
	I0510 19:10:30.014221  435640 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":31978,"bootTime":1746872252,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0510 19:10:30.014281  435640 start.go:140] virtualization: kvm guest
	I0510 19:10:30.017082  435640 out.go:177] * [kubernetes-upgrade-517660] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0510 19:10:30.018409  435640 out.go:177]   - MINIKUBE_LOCATION=20720
	I0510 19:10:30.018407  435640 notify.go:220] Checking for updates...
	I0510 19:10:30.019651  435640 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0510 19:10:30.020957  435640 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20720-388787/kubeconfig
	I0510 19:10:30.022237  435640 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-388787/.minikube
	I0510 19:10:30.023487  435640 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0510 19:10:30.024620  435640 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0510 19:10:30.026202  435640 config.go:182] Loaded profile config "NoKubernetes-065180": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 19:10:30.026323  435640 config.go:182] Loaded profile config "offline-crio-031624": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 19:10:30.026413  435640 config.go:182] Loaded profile config "running-upgrade-085041": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0510 19:10:30.026546  435640 driver.go:404] Setting default libvirt URI to qemu:///system
	I0510 19:10:30.060502  435640 out.go:177] * Using the kvm2 driver based on user configuration
	I0510 19:10:30.061874  435640 start.go:304] selected driver: kvm2
	I0510 19:10:30.061893  435640 start.go:908] validating driver "kvm2" against <nil>
	I0510 19:10:30.061905  435640 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0510 19:10:30.062608  435640 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0510 19:10:30.062710  435640 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20720-388787/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0510 19:10:30.078775  435640 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0510 19:10:30.078842  435640 start_flags.go:311] no existing cluster config was found, will generate one from the flags 
	I0510 19:10:30.079090  435640 start_flags.go:957] Wait components to verify : map[apiserver:true system_pods:true]
	I0510 19:10:30.079115  435640 cni.go:84] Creating CNI manager for ""
	I0510 19:10:30.079169  435640 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0510 19:10:30.079182  435640 start_flags.go:320] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0510 19:10:30.079271  435640 start.go:347] cluster config:
	{Name:kubernetes-upgrade-517660 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-517660 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 19:10:30.079381  435640 iso.go:125] acquiring lock: {Name:mk19640015999219180c6685480547adf0c02201 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0510 19:10:30.081374  435640 out.go:177] * Starting "kubernetes-upgrade-517660" primary control-plane node in "kubernetes-upgrade-517660" cluster
	I0510 19:10:30.082867  435640 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0510 19:10:30.082961  435640 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0510 19:10:30.082980  435640 cache.go:56] Caching tarball of preloaded images
	I0510 19:10:30.083091  435640 preload.go:172] Found /home/jenkins/minikube-integration/20720-388787/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0510 19:10:30.083104  435640 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0510 19:10:30.083222  435640 profile.go:143] Saving config to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kubernetes-upgrade-517660/config.json ...
	I0510 19:10:30.083268  435640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kubernetes-upgrade-517660/config.json: {Name:mkfd21dcaefacbeeb1b74ff5a5c7909c759eaff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 19:10:30.083443  435640 start.go:360] acquireMachinesLock for kubernetes-upgrade-517660: {Name:mk11499d7756d503a7a24339ad1a7f9ab9dc0fab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0510 19:11:33.592186  435640 start.go:364] duration metric: took 1m3.508697221s to acquireMachinesLock for "kubernetes-upgrade-517660"
	I0510 19:11:33.592274  435640 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-517660 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-517660 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0510 19:11:33.592435  435640 start.go:125] createHost starting for "" (driver="kvm2")
	I0510 19:11:33.595219  435640 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0510 19:11:33.595478  435640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:11:33.595553  435640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:11:33.613473  435640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45787
	I0510 19:11:33.614040  435640 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:11:33.614689  435640 main.go:141] libmachine: Using API Version  1
	I0510 19:11:33.614731  435640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:11:33.615162  435640 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:11:33.615411  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetMachineName
	I0510 19:11:33.615582  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .DriverName
	I0510 19:11:33.615781  435640 start.go:159] libmachine.API.Create for "kubernetes-upgrade-517660" (driver="kvm2")
	I0510 19:11:33.615820  435640 client.go:168] LocalClient.Create starting
	I0510 19:11:33.615867  435640 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem
	I0510 19:11:33.615916  435640 main.go:141] libmachine: Decoding PEM data...
	I0510 19:11:33.615948  435640 main.go:141] libmachine: Parsing certificate...
	I0510 19:11:33.616082  435640 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem
	I0510 19:11:33.616124  435640 main.go:141] libmachine: Decoding PEM data...
	I0510 19:11:33.616145  435640 main.go:141] libmachine: Parsing certificate...
	I0510 19:11:33.616173  435640 main.go:141] libmachine: Running pre-create checks...
	I0510 19:11:33.616185  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .PreCreateCheck
	I0510 19:11:33.616534  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetConfigRaw
	I0510 19:11:33.617014  435640 main.go:141] libmachine: Creating machine...
	I0510 19:11:33.617033  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .Create
	I0510 19:11:33.617168  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) creating KVM machine...
	I0510 19:11:33.617186  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) creating network...
	I0510 19:11:33.618801  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | found existing default KVM network
	I0510 19:11:33.619955  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | I0510 19:11:33.619777  436344 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:83:6a:aa} reservation:<nil>}
	I0510 19:11:33.620904  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | I0510 19:11:33.620808  436344 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:59:e4:36} reservation:<nil>}
	I0510 19:11:33.622011  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | I0510 19:11:33.621918  436344 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:4f:52:38} reservation:<nil>}
	I0510 19:11:33.623339  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | I0510 19:11:33.623227  436344 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000291350}
	I0510 19:11:33.623392  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | created network xml: 
	I0510 19:11:33.623412  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | <network>
	I0510 19:11:33.623422  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG |   <name>mk-kubernetes-upgrade-517660</name>
	I0510 19:11:33.623430  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG |   <dns enable='no'/>
	I0510 19:11:33.623456  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG |   
	I0510 19:11:33.623466  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0510 19:11:33.623474  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG |     <dhcp>
	I0510 19:11:33.623482  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0510 19:11:33.623764  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG |     </dhcp>
	I0510 19:11:33.623786  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG |   </ip>
	I0510 19:11:33.623820  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG |   
	I0510 19:11:33.623857  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | </network>
	I0510 19:11:33.623943  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | 
	I0510 19:11:33.628968  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | trying to create private KVM network mk-kubernetes-upgrade-517660 192.168.72.0/24...
	I0510 19:11:33.724630  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | private KVM network mk-kubernetes-upgrade-517660 192.168.72.0/24 created
	I0510 19:11:33.724681  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) setting up store path in /home/jenkins/minikube-integration/20720-388787/.minikube/machines/kubernetes-upgrade-517660 ...
	I0510 19:11:33.724696  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | I0510 19:11:33.724506  436344 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20720-388787/.minikube
	I0510 19:11:33.724721  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) building disk image from file:///home/jenkins/minikube-integration/20720-388787/.minikube/cache/iso/amd64/minikube-v1.35.0-1746739450-20720-amd64.iso
	I0510 19:11:33.724732  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Downloading /home/jenkins/minikube-integration/20720-388787/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20720-388787/.minikube/cache/iso/amd64/minikube-v1.35.0-1746739450-20720-amd64.iso...
	I0510 19:11:34.031330  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | I0510 19:11:34.031159  436344 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/kubernetes-upgrade-517660/id_rsa...
	I0510 19:11:34.373706  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | I0510 19:11:34.373546  436344 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/kubernetes-upgrade-517660/kubernetes-upgrade-517660.rawdisk...
	I0510 19:11:34.373748  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | Writing magic tar header
	I0510 19:11:34.373765  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | Writing SSH key tar header
	I0510 19:11:34.373804  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | I0510 19:11:34.373744  436344 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20720-388787/.minikube/machines/kubernetes-upgrade-517660 ...
	I0510 19:11:34.373932  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/kubernetes-upgrade-517660
	I0510 19:11:34.373960  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20720-388787/.minikube/machines
	I0510 19:11:34.373975  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) setting executable bit set on /home/jenkins/minikube-integration/20720-388787/.minikube/machines/kubernetes-upgrade-517660 (perms=drwx------)
	I0510 19:11:34.373989  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20720-388787/.minikube
	I0510 19:11:34.374010  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20720-388787
	I0510 19:11:34.374023  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0510 19:11:34.374033  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) setting executable bit set on /home/jenkins/minikube-integration/20720-388787/.minikube/machines (perms=drwxr-xr-x)
	I0510 19:11:34.374055  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | checking permissions on dir: /home/jenkins
	I0510 19:11:34.374068  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | checking permissions on dir: /home
	I0510 19:11:34.374079  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) setting executable bit set on /home/jenkins/minikube-integration/20720-388787/.minikube (perms=drwxr-xr-x)
	I0510 19:11:34.374089  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | skipping /home - not owner
	I0510 19:11:34.374111  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) setting executable bit set on /home/jenkins/minikube-integration/20720-388787 (perms=drwxrwxr-x)
	I0510 19:11:34.374126  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0510 19:11:34.374141  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0510 19:11:34.374152  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) creating domain...
	I0510 19:11:34.375447  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) define libvirt domain using xml: 
	I0510 19:11:34.375497  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) <domain type='kvm'>
	I0510 19:11:34.375508  435640 main.go:141] libmachine: (kubernetes-upgrade-517660)   <name>kubernetes-upgrade-517660</name>
	I0510 19:11:34.375521  435640 main.go:141] libmachine: (kubernetes-upgrade-517660)   <memory unit='MiB'>2200</memory>
	I0510 19:11:34.375531  435640 main.go:141] libmachine: (kubernetes-upgrade-517660)   <vcpu>2</vcpu>
	I0510 19:11:34.375549  435640 main.go:141] libmachine: (kubernetes-upgrade-517660)   <features>
	I0510 19:11:34.375557  435640 main.go:141] libmachine: (kubernetes-upgrade-517660)     <acpi/>
	I0510 19:11:34.375563  435640 main.go:141] libmachine: (kubernetes-upgrade-517660)     <apic/>
	I0510 19:11:34.375571  435640 main.go:141] libmachine: (kubernetes-upgrade-517660)     <pae/>
	I0510 19:11:34.375577  435640 main.go:141] libmachine: (kubernetes-upgrade-517660)     
	I0510 19:11:34.375586  435640 main.go:141] libmachine: (kubernetes-upgrade-517660)   </features>
	I0510 19:11:34.375593  435640 main.go:141] libmachine: (kubernetes-upgrade-517660)   <cpu mode='host-passthrough'>
	I0510 19:11:34.375600  435640 main.go:141] libmachine: (kubernetes-upgrade-517660)   
	I0510 19:11:34.375606  435640 main.go:141] libmachine: (kubernetes-upgrade-517660)   </cpu>
	I0510 19:11:34.375614  435640 main.go:141] libmachine: (kubernetes-upgrade-517660)   <os>
	I0510 19:11:34.375620  435640 main.go:141] libmachine: (kubernetes-upgrade-517660)     <type>hvm</type>
	I0510 19:11:34.375629  435640 main.go:141] libmachine: (kubernetes-upgrade-517660)     <boot dev='cdrom'/>
	I0510 19:11:34.375635  435640 main.go:141] libmachine: (kubernetes-upgrade-517660)     <boot dev='hd'/>
	I0510 19:11:34.375643  435640 main.go:141] libmachine: (kubernetes-upgrade-517660)     <bootmenu enable='no'/>
	I0510 19:11:34.375655  435640 main.go:141] libmachine: (kubernetes-upgrade-517660)   </os>
	I0510 19:11:34.375664  435640 main.go:141] libmachine: (kubernetes-upgrade-517660)   <devices>
	I0510 19:11:34.375671  435640 main.go:141] libmachine: (kubernetes-upgrade-517660)     <disk type='file' device='cdrom'>
	I0510 19:11:34.375687  435640 main.go:141] libmachine: (kubernetes-upgrade-517660)       <source file='/home/jenkins/minikube-integration/20720-388787/.minikube/machines/kubernetes-upgrade-517660/boot2docker.iso'/>
	I0510 19:11:34.375701  435640 main.go:141] libmachine: (kubernetes-upgrade-517660)       <target dev='hdc' bus='scsi'/>
	I0510 19:11:34.375709  435640 main.go:141] libmachine: (kubernetes-upgrade-517660)       <readonly/>
	I0510 19:11:34.375715  435640 main.go:141] libmachine: (kubernetes-upgrade-517660)     </disk>
	I0510 19:11:34.375725  435640 main.go:141] libmachine: (kubernetes-upgrade-517660)     <disk type='file' device='disk'>
	I0510 19:11:34.375743  435640 main.go:141] libmachine: (kubernetes-upgrade-517660)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0510 19:11:34.375760  435640 main.go:141] libmachine: (kubernetes-upgrade-517660)       <source file='/home/jenkins/minikube-integration/20720-388787/.minikube/machines/kubernetes-upgrade-517660/kubernetes-upgrade-517660.rawdisk'/>
	I0510 19:11:34.375768  435640 main.go:141] libmachine: (kubernetes-upgrade-517660)       <target dev='hda' bus='virtio'/>
	I0510 19:11:34.375776  435640 main.go:141] libmachine: (kubernetes-upgrade-517660)     </disk>
	I0510 19:11:34.375783  435640 main.go:141] libmachine: (kubernetes-upgrade-517660)     <interface type='network'>
	I0510 19:11:34.375793  435640 main.go:141] libmachine: (kubernetes-upgrade-517660)       <source network='mk-kubernetes-upgrade-517660'/>
	I0510 19:11:34.375808  435640 main.go:141] libmachine: (kubernetes-upgrade-517660)       <model type='virtio'/>
	I0510 19:11:34.375814  435640 main.go:141] libmachine: (kubernetes-upgrade-517660)     </interface>
	I0510 19:11:34.375824  435640 main.go:141] libmachine: (kubernetes-upgrade-517660)     <interface type='network'>
	I0510 19:11:34.375831  435640 main.go:141] libmachine: (kubernetes-upgrade-517660)       <source network='default'/>
	I0510 19:11:34.375839  435640 main.go:141] libmachine: (kubernetes-upgrade-517660)       <model type='virtio'/>
	I0510 19:11:34.375846  435640 main.go:141] libmachine: (kubernetes-upgrade-517660)     </interface>
	I0510 19:11:34.375854  435640 main.go:141] libmachine: (kubernetes-upgrade-517660)     <serial type='pty'>
	I0510 19:11:34.375861  435640 main.go:141] libmachine: (kubernetes-upgrade-517660)       <target port='0'/>
	I0510 19:11:34.375870  435640 main.go:141] libmachine: (kubernetes-upgrade-517660)     </serial>
	I0510 19:11:34.375878  435640 main.go:141] libmachine: (kubernetes-upgrade-517660)     <console type='pty'>
	I0510 19:11:34.375904  435640 main.go:141] libmachine: (kubernetes-upgrade-517660)       <target type='serial' port='0'/>
	I0510 19:11:34.375924  435640 main.go:141] libmachine: (kubernetes-upgrade-517660)     </console>
	I0510 19:11:34.375936  435640 main.go:141] libmachine: (kubernetes-upgrade-517660)     <rng model='virtio'>
	I0510 19:11:34.375953  435640 main.go:141] libmachine: (kubernetes-upgrade-517660)       <backend model='random'>/dev/random</backend>
	I0510 19:11:34.375966  435640 main.go:141] libmachine: (kubernetes-upgrade-517660)     </rng>
	I0510 19:11:34.375977  435640 main.go:141] libmachine: (kubernetes-upgrade-517660)     
	I0510 19:11:34.375986  435640 main.go:141] libmachine: (kubernetes-upgrade-517660)     
	I0510 19:11:34.376001  435640 main.go:141] libmachine: (kubernetes-upgrade-517660)   </devices>
	I0510 19:11:34.376037  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) </domain>
	I0510 19:11:34.376072  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) 
	I0510 19:11:34.384003  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined MAC address 52:54:00:2e:f7:6e in network default
	I0510 19:11:34.384691  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) starting domain...
	I0510 19:11:34.384713  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) ensuring networks are active...
	I0510 19:11:34.384967  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:11:34.385971  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Ensuring network default is active
	I0510 19:11:34.386352  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Ensuring network mk-kubernetes-upgrade-517660 is active
	I0510 19:11:34.387090  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) getting domain XML...
	I0510 19:11:34.387941  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) creating domain...
	I0510 19:11:35.760256  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) waiting for IP...
	I0510 19:11:35.762454  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:11:35.762996  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | unable to find current IP address of domain kubernetes-upgrade-517660 in network mk-kubernetes-upgrade-517660
	I0510 19:11:35.763062  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | I0510 19:11:35.762984  436344 retry.go:31] will retry after 294.587966ms: waiting for domain to come up
	I0510 19:11:36.059666  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:11:36.060238  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | unable to find current IP address of domain kubernetes-upgrade-517660 in network mk-kubernetes-upgrade-517660
	I0510 19:11:36.060306  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | I0510 19:11:36.060207  436344 retry.go:31] will retry after 387.339948ms: waiting for domain to come up
	I0510 19:11:36.449684  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:11:36.450140  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | unable to find current IP address of domain kubernetes-upgrade-517660 in network mk-kubernetes-upgrade-517660
	I0510 19:11:36.450165  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | I0510 19:11:36.450113  436344 retry.go:31] will retry after 300.268094ms: waiting for domain to come up
	I0510 19:11:36.751945  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:11:36.752516  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | unable to find current IP address of domain kubernetes-upgrade-517660 in network mk-kubernetes-upgrade-517660
	I0510 19:11:36.752582  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | I0510 19:11:36.752504  436344 retry.go:31] will retry after 497.30185ms: waiting for domain to come up
	I0510 19:11:37.251458  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:11:37.252064  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | unable to find current IP address of domain kubernetes-upgrade-517660 in network mk-kubernetes-upgrade-517660
	I0510 19:11:37.252099  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | I0510 19:11:37.252037  436344 retry.go:31] will retry after 486.019629ms: waiting for domain to come up
	I0510 19:11:37.740015  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:11:37.740503  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | unable to find current IP address of domain kubernetes-upgrade-517660 in network mk-kubernetes-upgrade-517660
	I0510 19:11:37.740528  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | I0510 19:11:37.740469  436344 retry.go:31] will retry after 776.196921ms: waiting for domain to come up
	I0510 19:11:38.518445  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:11:38.519169  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | unable to find current IP address of domain kubernetes-upgrade-517660 in network mk-kubernetes-upgrade-517660
	I0510 19:11:38.519200  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | I0510 19:11:38.519133  436344 retry.go:31] will retry after 1.15573032s: waiting for domain to come up
	I0510 19:11:39.677053  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:11:39.677606  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | unable to find current IP address of domain kubernetes-upgrade-517660 in network mk-kubernetes-upgrade-517660
	I0510 19:11:39.677638  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | I0510 19:11:39.677585  436344 retry.go:31] will retry after 1.193225004s: waiting for domain to come up
	I0510 19:11:40.872446  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:11:40.872995  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | unable to find current IP address of domain kubernetes-upgrade-517660 in network mk-kubernetes-upgrade-517660
	I0510 19:11:40.873035  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | I0510 19:11:40.872963  436344 retry.go:31] will retry after 1.493128444s: waiting for domain to come up
	I0510 19:11:42.368727  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:11:42.369292  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | unable to find current IP address of domain kubernetes-upgrade-517660 in network mk-kubernetes-upgrade-517660
	I0510 19:11:42.369316  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | I0510 19:11:42.369203  436344 retry.go:31] will retry after 1.460508581s: waiting for domain to come up
	I0510 19:11:43.831853  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:11:43.832579  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | unable to find current IP address of domain kubernetes-upgrade-517660 in network mk-kubernetes-upgrade-517660
	I0510 19:11:43.832611  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | I0510 19:11:43.832535  436344 retry.go:31] will retry after 1.748932294s: waiting for domain to come up
	I0510 19:11:45.583432  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:11:45.583990  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | unable to find current IP address of domain kubernetes-upgrade-517660 in network mk-kubernetes-upgrade-517660
	I0510 19:11:45.584022  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | I0510 19:11:45.583951  436344 retry.go:31] will retry after 2.508400597s: waiting for domain to come up
	I0510 19:11:48.093705  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:11:48.094293  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | unable to find current IP address of domain kubernetes-upgrade-517660 in network mk-kubernetes-upgrade-517660
	I0510 19:11:48.094320  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | I0510 19:11:48.094266  436344 retry.go:31] will retry after 2.752068129s: waiting for domain to come up
	I0510 19:11:50.848483  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:11:50.848974  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | unable to find current IP address of domain kubernetes-upgrade-517660 in network mk-kubernetes-upgrade-517660
	I0510 19:11:50.849048  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | I0510 19:11:50.848962  436344 retry.go:31] will retry after 4.121777127s: waiting for domain to come up
	I0510 19:11:54.972390  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:11:54.972948  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | unable to find current IP address of domain kubernetes-upgrade-517660 in network mk-kubernetes-upgrade-517660
	I0510 19:11:54.972975  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | I0510 19:11:54.972911  436344 retry.go:31] will retry after 4.962251114s: waiting for domain to come up
	I0510 19:11:59.936429  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:11:59.937204  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) found domain IP: 192.168.72.244
	I0510 19:11:59.937235  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has current primary IP address 192.168.72.244 and MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:11:59.937244  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) reserving static IP address...
	I0510 19:11:59.937648  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-517660", mac: "52:54:00:1b:3b:ac", ip: "192.168.72.244"} in network mk-kubernetes-upgrade-517660
	I0510 19:12:00.028539  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | Getting to WaitForSSH function...
	I0510 19:12:00.028594  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) reserved static IP address 192.168.72.244 for domain kubernetes-upgrade-517660
	I0510 19:12:00.028618  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) waiting for SSH...
	I0510 19:12:00.031718  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:12:00.032371  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:3b:ac", ip: ""} in network mk-kubernetes-upgrade-517660: {Iface:virbr4 ExpiryTime:2025-05-10 20:11:51 +0000 UTC Type:0 Mac:52:54:00:1b:3b:ac Iaid: IPaddr:192.168.72.244 Prefix:24 Hostname:minikube Clientid:01:52:54:00:1b:3b:ac}
	I0510 19:12:00.032417  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined IP address 192.168.72.244 and MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:12:00.032600  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | Using SSH client type: external
	I0510 19:12:00.032637  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | Using SSH private key: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/kubernetes-upgrade-517660/id_rsa (-rw-------)
	I0510 19:12:00.032691  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.244 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20720-388787/.minikube/machines/kubernetes-upgrade-517660/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0510 19:12:00.032704  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | About to run SSH command:
	I0510 19:12:00.032716  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | exit 0
	I0510 19:12:00.159652  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | SSH cmd err, output: <nil>: 
	I0510 19:12:00.159922  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) KVM machine creation complete
	I0510 19:12:00.160171  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetConfigRaw
	I0510 19:12:00.160829  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .DriverName
	I0510 19:12:00.161035  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .DriverName
	I0510 19:12:00.161228  435640 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0510 19:12:00.161247  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetState
	I0510 19:12:00.162654  435640 main.go:141] libmachine: Detecting operating system of created instance...
	I0510 19:12:00.162677  435640 main.go:141] libmachine: Waiting for SSH to be available...
	I0510 19:12:00.162682  435640 main.go:141] libmachine: Getting to WaitForSSH function...
	I0510 19:12:00.162689  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHHostname
	I0510 19:12:00.164938  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:12:00.165340  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:3b:ac", ip: ""} in network mk-kubernetes-upgrade-517660: {Iface:virbr4 ExpiryTime:2025-05-10 20:11:51 +0000 UTC Type:0 Mac:52:54:00:1b:3b:ac Iaid: IPaddr:192.168.72.244 Prefix:24 Hostname:kubernetes-upgrade-517660 Clientid:01:52:54:00:1b:3b:ac}
	I0510 19:12:00.165388  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined IP address 192.168.72.244 and MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:12:00.165556  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHPort
	I0510 19:12:00.165769  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHKeyPath
	I0510 19:12:00.166007  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHKeyPath
	I0510 19:12:00.166152  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHUsername
	I0510 19:12:00.166321  435640 main.go:141] libmachine: Using SSH client type: native
	I0510 19:12:00.166628  435640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.72.244 22 <nil> <nil>}
	I0510 19:12:00.166650  435640 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0510 19:12:00.271123  435640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0510 19:12:00.271160  435640 main.go:141] libmachine: Detecting the provisioner...
	I0510 19:12:00.271172  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHHostname
	I0510 19:12:00.274281  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:12:00.274774  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:3b:ac", ip: ""} in network mk-kubernetes-upgrade-517660: {Iface:virbr4 ExpiryTime:2025-05-10 20:11:51 +0000 UTC Type:0 Mac:52:54:00:1b:3b:ac Iaid: IPaddr:192.168.72.244 Prefix:24 Hostname:kubernetes-upgrade-517660 Clientid:01:52:54:00:1b:3b:ac}
	I0510 19:12:00.274813  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined IP address 192.168.72.244 and MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:12:00.274965  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHPort
	I0510 19:12:00.275195  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHKeyPath
	I0510 19:12:00.275362  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHKeyPath
	I0510 19:12:00.275511  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHUsername
	I0510 19:12:00.275722  435640 main.go:141] libmachine: Using SSH client type: native
	I0510 19:12:00.276003  435640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.72.244 22 <nil> <nil>}
	I0510 19:12:00.276020  435640 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0510 19:12:00.382810  435640 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2024.11.2-dirty
	ID=buildroot
	VERSION_ID=2024.11.2
	PRETTY_NAME="Buildroot 2024.11.2"
	
	I0510 19:12:00.382934  435640 main.go:141] libmachine: found compatible host: buildroot
	I0510 19:12:00.382948  435640 main.go:141] libmachine: Provisioning with buildroot...
	I0510 19:12:00.382958  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetMachineName
	I0510 19:12:00.383228  435640 buildroot.go:166] provisioning hostname "kubernetes-upgrade-517660"
	I0510 19:12:00.383271  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetMachineName
	I0510 19:12:00.383469  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHHostname
	I0510 19:12:00.386506  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:12:00.386891  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:3b:ac", ip: ""} in network mk-kubernetes-upgrade-517660: {Iface:virbr4 ExpiryTime:2025-05-10 20:11:51 +0000 UTC Type:0 Mac:52:54:00:1b:3b:ac Iaid: IPaddr:192.168.72.244 Prefix:24 Hostname:kubernetes-upgrade-517660 Clientid:01:52:54:00:1b:3b:ac}
	I0510 19:12:00.386917  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined IP address 192.168.72.244 and MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:12:00.387055  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHPort
	I0510 19:12:00.387256  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHKeyPath
	I0510 19:12:00.387422  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHKeyPath
	I0510 19:12:00.387555  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHUsername
	I0510 19:12:00.387731  435640 main.go:141] libmachine: Using SSH client type: native
	I0510 19:12:00.387930  435640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.72.244 22 <nil> <nil>}
	I0510 19:12:00.387942  435640 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-517660 && echo "kubernetes-upgrade-517660" | sudo tee /etc/hostname
	I0510 19:12:00.528659  435640 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-517660
	
	I0510 19:12:00.528695  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHHostname
	I0510 19:12:00.531910  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:12:00.532347  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:3b:ac", ip: ""} in network mk-kubernetes-upgrade-517660: {Iface:virbr4 ExpiryTime:2025-05-10 20:11:51 +0000 UTC Type:0 Mac:52:54:00:1b:3b:ac Iaid: IPaddr:192.168.72.244 Prefix:24 Hostname:kubernetes-upgrade-517660 Clientid:01:52:54:00:1b:3b:ac}
	I0510 19:12:00.532375  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined IP address 192.168.72.244 and MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:12:00.532877  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHPort
	I0510 19:12:00.533087  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHKeyPath
	I0510 19:12:00.533269  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHKeyPath
	I0510 19:12:00.533422  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHUsername
	I0510 19:12:00.533597  435640 main.go:141] libmachine: Using SSH client type: native
	I0510 19:12:00.533866  435640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.72.244 22 <nil> <nil>}
	I0510 19:12:00.533894  435640 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-517660' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-517660/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-517660' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0510 19:12:00.652645  435640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0510 19:12:00.652687  435640 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20720-388787/.minikube CaCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20720-388787/.minikube}
	I0510 19:12:00.652727  435640 buildroot.go:174] setting up certificates
	I0510 19:12:00.652747  435640 provision.go:84] configureAuth start
	I0510 19:12:00.652764  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetMachineName
	I0510 19:12:00.653076  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetIP
	I0510 19:12:00.656224  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:12:00.656587  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:3b:ac", ip: ""} in network mk-kubernetes-upgrade-517660: {Iface:virbr4 ExpiryTime:2025-05-10 20:11:51 +0000 UTC Type:0 Mac:52:54:00:1b:3b:ac Iaid: IPaddr:192.168.72.244 Prefix:24 Hostname:kubernetes-upgrade-517660 Clientid:01:52:54:00:1b:3b:ac}
	I0510 19:12:00.656626  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined IP address 192.168.72.244 and MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:12:00.656846  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHHostname
	I0510 19:12:00.659876  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:12:00.660154  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:3b:ac", ip: ""} in network mk-kubernetes-upgrade-517660: {Iface:virbr4 ExpiryTime:2025-05-10 20:11:51 +0000 UTC Type:0 Mac:52:54:00:1b:3b:ac Iaid: IPaddr:192.168.72.244 Prefix:24 Hostname:kubernetes-upgrade-517660 Clientid:01:52:54:00:1b:3b:ac}
	I0510 19:12:00.660185  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined IP address 192.168.72.244 and MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:12:00.660336  435640 provision.go:143] copyHostCerts
	I0510 19:12:00.660413  435640 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem, removing ...
	I0510 19:12:00.660434  435640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem
	I0510 19:12:00.660507  435640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem (1078 bytes)
	I0510 19:12:00.660648  435640 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem, removing ...
	I0510 19:12:00.660662  435640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem
	I0510 19:12:00.660710  435640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem (1123 bytes)
	I0510 19:12:00.660821  435640 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem, removing ...
	I0510 19:12:00.660831  435640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem
	I0510 19:12:00.660860  435640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem (1675 bytes)
	I0510 19:12:00.660957  435640 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-517660 san=[127.0.0.1 192.168.72.244 kubernetes-upgrade-517660 localhost minikube]
	I0510 19:12:00.911905  435640 provision.go:177] copyRemoteCerts
	I0510 19:12:00.911986  435640 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0510 19:12:00.912020  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHHostname
	I0510 19:12:00.915073  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:12:00.915515  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:3b:ac", ip: ""} in network mk-kubernetes-upgrade-517660: {Iface:virbr4 ExpiryTime:2025-05-10 20:11:51 +0000 UTC Type:0 Mac:52:54:00:1b:3b:ac Iaid: IPaddr:192.168.72.244 Prefix:24 Hostname:kubernetes-upgrade-517660 Clientid:01:52:54:00:1b:3b:ac}
	I0510 19:12:00.915544  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined IP address 192.168.72.244 and MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:12:00.915750  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHPort
	I0510 19:12:00.915952  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHKeyPath
	I0510 19:12:00.916114  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHUsername
	I0510 19:12:00.916236  435640 sshutil.go:53] new ssh client: &{IP:192.168.72.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/kubernetes-upgrade-517660/id_rsa Username:docker}
	I0510 19:12:01.000415  435640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0510 19:12:01.034533  435640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0510 19:12:01.070632  435640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0510 19:12:01.106407  435640 provision.go:87] duration metric: took 453.640719ms to configureAuth
	I0510 19:12:01.106446  435640 buildroot.go:189] setting minikube options for container-runtime
	I0510 19:12:01.106661  435640 config.go:182] Loaded profile config "kubernetes-upgrade-517660": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0510 19:12:01.106767  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHHostname
	I0510 19:12:01.110526  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:12:01.110994  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:3b:ac", ip: ""} in network mk-kubernetes-upgrade-517660: {Iface:virbr4 ExpiryTime:2025-05-10 20:11:51 +0000 UTC Type:0 Mac:52:54:00:1b:3b:ac Iaid: IPaddr:192.168.72.244 Prefix:24 Hostname:kubernetes-upgrade-517660 Clientid:01:52:54:00:1b:3b:ac}
	I0510 19:12:01.111033  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined IP address 192.168.72.244 and MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:12:01.111204  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHPort
	I0510 19:12:01.111522  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHKeyPath
	I0510 19:12:01.111747  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHKeyPath
	I0510 19:12:01.111935  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHUsername
	I0510 19:12:01.112139  435640 main.go:141] libmachine: Using SSH client type: native
	I0510 19:12:01.112357  435640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.72.244 22 <nil> <nil>}
	I0510 19:12:01.112372  435640 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0510 19:12:01.375364  435640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0510 19:12:01.375415  435640 main.go:141] libmachine: Checking connection to Docker...
	I0510 19:12:01.375429  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetURL
	I0510 19:12:01.376794  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | using libvirt version 6000000
	I0510 19:12:01.379891  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:12:01.380315  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:3b:ac", ip: ""} in network mk-kubernetes-upgrade-517660: {Iface:virbr4 ExpiryTime:2025-05-10 20:11:51 +0000 UTC Type:0 Mac:52:54:00:1b:3b:ac Iaid: IPaddr:192.168.72.244 Prefix:24 Hostname:kubernetes-upgrade-517660 Clientid:01:52:54:00:1b:3b:ac}
	I0510 19:12:01.380353  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined IP address 192.168.72.244 and MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:12:01.380731  435640 main.go:141] libmachine: Docker is up and running!
	I0510 19:12:01.380752  435640 main.go:141] libmachine: Reticulating splines...
	I0510 19:12:01.380763  435640 client.go:171] duration metric: took 27.764927594s to LocalClient.Create
	I0510 19:12:01.380795  435640 start.go:167] duration metric: took 27.765016021s to libmachine.API.Create "kubernetes-upgrade-517660"
	I0510 19:12:01.380811  435640 start.go:293] postStartSetup for "kubernetes-upgrade-517660" (driver="kvm2")
	I0510 19:12:01.380838  435640 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0510 19:12:01.380871  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .DriverName
	I0510 19:12:01.381174  435640 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0510 19:12:01.381213  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHHostname
	I0510 19:12:01.383897  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:12:01.384314  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:3b:ac", ip: ""} in network mk-kubernetes-upgrade-517660: {Iface:virbr4 ExpiryTime:2025-05-10 20:11:51 +0000 UTC Type:0 Mac:52:54:00:1b:3b:ac Iaid: IPaddr:192.168.72.244 Prefix:24 Hostname:kubernetes-upgrade-517660 Clientid:01:52:54:00:1b:3b:ac}
	I0510 19:12:01.384344  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined IP address 192.168.72.244 and MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:12:01.384630  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHPort
	I0510 19:12:01.384877  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHKeyPath
	I0510 19:12:01.385062  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHUsername
	I0510 19:12:01.385227  435640 sshutil.go:53] new ssh client: &{IP:192.168.72.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/kubernetes-upgrade-517660/id_rsa Username:docker}
	I0510 19:12:01.478795  435640 ssh_runner.go:195] Run: cat /etc/os-release
	I0510 19:12:01.485472  435640 info.go:137] Remote host: Buildroot 2024.11.2
	I0510 19:12:01.485506  435640 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-388787/.minikube/addons for local assets ...
	I0510 19:12:01.485578  435640 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-388787/.minikube/files for local assets ...
	I0510 19:12:01.485669  435640 filesync.go:149] local asset: /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem -> 3959802.pem in /etc/ssl/certs
	I0510 19:12:01.485823  435640 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0510 19:12:01.498898  435640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem --> /etc/ssl/certs/3959802.pem (1708 bytes)
	I0510 19:12:01.532852  435640 start.go:296] duration metric: took 152.024526ms for postStartSetup
	I0510 19:12:01.532937  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetConfigRaw
	I0510 19:12:01.533622  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetIP
	I0510 19:12:01.536330  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:12:01.536757  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:3b:ac", ip: ""} in network mk-kubernetes-upgrade-517660: {Iface:virbr4 ExpiryTime:2025-05-10 20:11:51 +0000 UTC Type:0 Mac:52:54:00:1b:3b:ac Iaid: IPaddr:192.168.72.244 Prefix:24 Hostname:kubernetes-upgrade-517660 Clientid:01:52:54:00:1b:3b:ac}
	I0510 19:12:01.536793  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined IP address 192.168.72.244 and MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:12:01.537140  435640 profile.go:143] Saving config to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kubernetes-upgrade-517660/config.json ...
	I0510 19:12:01.537369  435640 start.go:128] duration metric: took 27.944919413s to createHost
	I0510 19:12:01.537426  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHHostname
	I0510 19:12:01.540042  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:12:01.540433  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:3b:ac", ip: ""} in network mk-kubernetes-upgrade-517660: {Iface:virbr4 ExpiryTime:2025-05-10 20:11:51 +0000 UTC Type:0 Mac:52:54:00:1b:3b:ac Iaid: IPaddr:192.168.72.244 Prefix:24 Hostname:kubernetes-upgrade-517660 Clientid:01:52:54:00:1b:3b:ac}
	I0510 19:12:01.540461  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined IP address 192.168.72.244 and MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:12:01.540751  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHPort
	I0510 19:12:01.540977  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHKeyPath
	I0510 19:12:01.541205  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHKeyPath
	I0510 19:12:01.541365  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHUsername
	I0510 19:12:01.541570  435640 main.go:141] libmachine: Using SSH client type: native
	I0510 19:12:01.541843  435640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.72.244 22 <nil> <nil>}
	I0510 19:12:01.541856  435640 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0510 19:12:01.654468  435640 main.go:141] libmachine: SSH cmd err, output: <nil>: 1746904321.597498307
	
	I0510 19:12:01.654502  435640 fix.go:216] guest clock: 1746904321.597498307
	I0510 19:12:01.654513  435640 fix.go:229] Guest: 2025-05-10 19:12:01.597498307 +0000 UTC Remote: 2025-05-10 19:12:01.537383089 +0000 UTC m=+91.562741148 (delta=60.115218ms)
	I0510 19:12:01.654561  435640 fix.go:200] guest clock delta is within tolerance: 60.115218ms
	I0510 19:12:01.654569  435640 start.go:83] releasing machines lock for "kubernetes-upgrade-517660", held for 28.062335878s
	I0510 19:12:01.654604  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .DriverName
	I0510 19:12:01.654958  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetIP
	I0510 19:12:01.658217  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:12:01.658665  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:3b:ac", ip: ""} in network mk-kubernetes-upgrade-517660: {Iface:virbr4 ExpiryTime:2025-05-10 20:11:51 +0000 UTC Type:0 Mac:52:54:00:1b:3b:ac Iaid: IPaddr:192.168.72.244 Prefix:24 Hostname:kubernetes-upgrade-517660 Clientid:01:52:54:00:1b:3b:ac}
	I0510 19:12:01.658720  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined IP address 192.168.72.244 and MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:12:01.658971  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .DriverName
	I0510 19:12:01.659518  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .DriverName
	I0510 19:12:01.659708  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .DriverName
	I0510 19:12:01.659835  435640 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0510 19:12:01.659907  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHHostname
	I0510 19:12:01.659910  435640 ssh_runner.go:195] Run: cat /version.json
	I0510 19:12:01.659952  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHHostname
	I0510 19:12:01.663266  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:12:01.663482  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:12:01.663845  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:3b:ac", ip: ""} in network mk-kubernetes-upgrade-517660: {Iface:virbr4 ExpiryTime:2025-05-10 20:11:51 +0000 UTC Type:0 Mac:52:54:00:1b:3b:ac Iaid: IPaddr:192.168.72.244 Prefix:24 Hostname:kubernetes-upgrade-517660 Clientid:01:52:54:00:1b:3b:ac}
	I0510 19:12:01.663877  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined IP address 192.168.72.244 and MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:12:01.664035  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:3b:ac", ip: ""} in network mk-kubernetes-upgrade-517660: {Iface:virbr4 ExpiryTime:2025-05-10 20:11:51 +0000 UTC Type:0 Mac:52:54:00:1b:3b:ac Iaid: IPaddr:192.168.72.244 Prefix:24 Hostname:kubernetes-upgrade-517660 Clientid:01:52:54:00:1b:3b:ac}
	I0510 19:12:01.664063  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined IP address 192.168.72.244 and MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:12:01.664134  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHPort
	I0510 19:12:01.664221  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHPort
	I0510 19:12:01.664401  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHKeyPath
	I0510 19:12:01.664442  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHKeyPath
	I0510 19:12:01.664595  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHUsername
	I0510 19:12:01.664598  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHUsername
	I0510 19:12:01.664750  435640 sshutil.go:53] new ssh client: &{IP:192.168.72.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/kubernetes-upgrade-517660/id_rsa Username:docker}
	I0510 19:12:01.665253  435640 sshutil.go:53] new ssh client: &{IP:192.168.72.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/kubernetes-upgrade-517660/id_rsa Username:docker}
	I0510 19:12:01.746639  435640 ssh_runner.go:195] Run: systemctl --version
	I0510 19:12:01.781380  435640 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0510 19:12:01.961653  435640 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0510 19:12:01.970840  435640 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0510 19:12:01.970908  435640 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0510 19:12:02.000696  435640 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0510 19:12:02.000729  435640 start.go:495] detecting cgroup driver to use...
	I0510 19:12:02.000808  435640 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0510 19:12:02.028767  435640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0510 19:12:02.050416  435640 docker.go:225] disabling cri-docker service (if available) ...
	I0510 19:12:02.050477  435640 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0510 19:12:02.071610  435640 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0510 19:12:02.096538  435640 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0510 19:12:02.276966  435640 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0510 19:12:02.438186  435640 docker.go:241] disabling docker service ...
	I0510 19:12:02.438245  435640 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0510 19:12:02.455898  435640 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0510 19:12:02.475705  435640 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0510 19:12:02.682883  435640 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0510 19:12:02.838728  435640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0510 19:12:02.856009  435640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0510 19:12:02.879112  435640 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0510 19:12:02.879193  435640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:12:02.892615  435640 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0510 19:12:02.892709  435640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:12:02.906343  435640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:12:02.919517  435640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
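The sed invocations above pin the pause image and the cgroup manager in /etc/crio/crio.conf.d/02-crio.conf. A rough Go equivalent of that line rewriting is sketched below (illustrative only, not minikube's implementation; it assumes simple "key = value" lines and writes the file directly rather than going through sudo):

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// rewriteCrioConf mirrors the sed edits in the log: replace any existing
	// pause_image and cgroup_manager lines in the CRI-O drop-in config.
	// Treating whole lines as key = value pairs is a simplifying assumption.
	func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
		data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(data, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
		return os.WriteFile(path, data, 0o644)
	}

	func main() {
		// Values taken from the log above; editing the real file requires root.
		err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
			"registry.k8s.io/pause:3.2", "cgroupfs")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}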
	I0510 19:12:02.932808  435640 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0510 19:12:02.946758  435640 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0510 19:12:02.959331  435640 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0510 19:12:02.959397  435640 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0510 19:12:02.976085  435640 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0510 19:12:02.988562  435640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 19:12:03.129266  435640 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0510 19:12:03.266035  435640 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0510 19:12:03.266133  435640 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0510 19:12:03.271570  435640 start.go:563] Will wait 60s for crictl version
	I0510 19:12:03.271644  435640 ssh_runner.go:195] Run: which crictl
	I0510 19:12:03.276302  435640 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0510 19:12:03.318925  435640 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0510 19:12:03.319024  435640 ssh_runner.go:195] Run: crio --version
	I0510 19:12:03.349409  435640 ssh_runner.go:195] Run: crio --version
	I0510 19:12:03.381260  435640 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0510 19:12:03.382533  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetIP
	I0510 19:12:03.385319  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:12:03.385719  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:3b:ac", ip: ""} in network mk-kubernetes-upgrade-517660: {Iface:virbr4 ExpiryTime:2025-05-10 20:11:51 +0000 UTC Type:0 Mac:52:54:00:1b:3b:ac Iaid: IPaddr:192.168.72.244 Prefix:24 Hostname:kubernetes-upgrade-517660 Clientid:01:52:54:00:1b:3b:ac}
	I0510 19:12:03.385749  435640 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined IP address 192.168.72.244 and MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:12:03.386014  435640 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0510 19:12:03.391634  435640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0510 19:12:03.408007  435640 kubeadm.go:875] updating cluster {Name:kubernetes-upgrade-517660 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-517660 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.244 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0510 19:12:03.408107  435640 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0510 19:12:03.408169  435640 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 19:12:03.446439  435640 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0510 19:12:03.446521  435640 ssh_runner.go:195] Run: which lz4
	I0510 19:12:03.451312  435640 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0510 19:12:03.456388  435640 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0510 19:12:03.456427  435640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0510 19:12:05.303005  435640 crio.go:462] duration metric: took 1.851729884s to copy over tarball
	I0510 19:12:05.303105  435640 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0510 19:12:07.874893  435640 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.571748306s)
	I0510 19:12:07.874932  435640 crio.go:469] duration metric: took 2.571887442s to extract the tarball
	I0510 19:12:07.874940  435640 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0510 19:12:07.924503  435640 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 19:12:07.991459  435640 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0510 19:12:07.991499  435640 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0510 19:12:07.991595  435640 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0510 19:12:07.991624  435640 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0510 19:12:07.991635  435640 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0510 19:12:07.991650  435640 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0510 19:12:07.991599  435640 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0510 19:12:07.991606  435640 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0510 19:12:07.991663  435640 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0510 19:12:07.991699  435640 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0510 19:12:07.993410  435640 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0510 19:12:07.993493  435640 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0510 19:12:07.993446  435640 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0510 19:12:07.993839  435640 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0510 19:12:07.993413  435640 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0510 19:12:07.993412  435640 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0510 19:12:07.993890  435640 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0510 19:12:07.993889  435640 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0510 19:12:08.139840  435640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0510 19:12:08.157266  435640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0510 19:12:08.166476  435640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0510 19:12:08.167580  435640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0510 19:12:08.176528  435640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0510 19:12:08.179817  435640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0510 19:12:08.182883  435640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0510 19:12:08.224429  435640 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0510 19:12:08.224482  435640 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0510 19:12:08.224532  435640 ssh_runner.go:195] Run: which crictl
	I0510 19:12:08.357884  435640 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0510 19:12:08.357931  435640 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0510 19:12:08.357950  435640 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0510 19:12:08.357998  435640 ssh_runner.go:195] Run: which crictl
	I0510 19:12:08.358050  435640 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0510 19:12:08.358000  435640 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0510 19:12:08.358079  435640 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0510 19:12:08.358094  435640 ssh_runner.go:195] Run: which crictl
	I0510 19:12:08.358121  435640 ssh_runner.go:195] Run: which crictl
	I0510 19:12:08.363969  435640 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0510 19:12:08.364010  435640 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0510 19:12:08.364060  435640 ssh_runner.go:195] Run: which crictl
	I0510 19:12:08.393424  435640 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0510 19:12:08.393486  435640 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0510 19:12:08.393537  435640 ssh_runner.go:195] Run: which crictl
	I0510 19:12:08.393549  435640 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0510 19:12:08.393585  435640 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0510 19:12:08.393606  435640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0510 19:12:08.393637  435640 ssh_runner.go:195] Run: which crictl
	I0510 19:12:08.393725  435640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0510 19:12:08.393796  435640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0510 19:12:08.393839  435640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0510 19:12:08.393881  435640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0510 19:12:08.504887  435640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0510 19:12:08.539209  435640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0510 19:12:08.539287  435640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0510 19:12:08.539315  435640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0510 19:12:08.539317  435640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0510 19:12:08.539373  435640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0510 19:12:08.539405  435640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0510 19:12:08.612364  435640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0510 19:12:08.739065  435640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0510 19:12:08.739099  435640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0510 19:12:08.739210  435640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0510 19:12:08.739261  435640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0510 19:12:08.739325  435640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0510 19:12:08.739335  435640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0510 19:12:08.801954  435640 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0510 19:12:08.937098  435640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0510 19:12:08.941949  435640 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0510 19:12:08.942037  435640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0510 19:12:08.942061  435640 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0510 19:12:08.942175  435640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0510 19:12:08.942317  435640 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0510 19:12:08.946215  435640 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0510 19:12:09.130515  435640 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0510 19:12:09.130587  435640 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0510 19:12:09.130651  435640 cache_images.go:92] duration metric: took 1.139134192s to LoadCachedImages
	W0510 19:12:09.130733  435640 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0510 19:12:09.130748  435640 kubeadm.go:926] updating node { 192.168.72.244 8443 v1.20.0 crio true true} ...
	I0510 19:12:09.130844  435640 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-517660 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.244
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-517660 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0510 19:12:09.130909  435640 ssh_runner.go:195] Run: crio config
	I0510 19:12:09.187925  435640 cni.go:84] Creating CNI manager for ""
	I0510 19:12:09.187950  435640 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0510 19:12:09.187964  435640 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0510 19:12:09.187983  435640 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.244 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-517660 NodeName:kubernetes-upgrade-517660 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.244"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.244 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0510 19:12:09.188113  435640 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.244
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-517660"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.244
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.244"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0510 19:12:09.188177  435640 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0510 19:12:09.201559  435640 binaries.go:44] Found k8s binaries, skipping transfer
	I0510 19:12:09.201679  435640 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0510 19:12:09.214278  435640 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0510 19:12:09.237350  435640 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0510 19:12:09.260608  435640 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0510 19:12:09.285088  435640 ssh_runner.go:195] Run: grep 192.168.72.244	control-plane.minikube.internal$ /etc/hosts
	I0510 19:12:09.290389  435640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.244	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
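This /etc/hosts update follows the same pattern as the host.minikube.internal entry earlier: drop any stale line for the name, then append the pinned address. A simplified Go sketch of that idempotent rewrite (illustration only; it writes /etc/hosts directly instead of using a temp file and sudo cp as the logged one-liner does):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// pinHost drops any existing "...<TAB>name" entry and appends "ip<TAB>name",
	// the same effect as the grep -v / echo one-liner in the log. Writing the
	// file directly (no temp file, no sudo cp) is a simplification.
	func pinHost(hostsPath, ip, name string) error {
		data, err := os.ReadFile(hostsPath)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		if err := pinHost("/etc/hosts", "192.168.72.244", "control-plane.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}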
	I0510 19:12:09.306587  435640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 19:12:09.459613  435640 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0510 19:12:09.502175  435640 certs.go:68] Setting up /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kubernetes-upgrade-517660 for IP: 192.168.72.244
	I0510 19:12:09.502207  435640 certs.go:194] generating shared ca certs ...
	I0510 19:12:09.502230  435640 certs.go:226] acquiring lock for ca certs: {Name:mk8db74782205da4ac57ef815dd495cda255251a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 19:12:09.502488  435640 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20720-388787/.minikube/ca.key
	I0510 19:12:09.502560  435640 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.key
	I0510 19:12:09.502576  435640 certs.go:256] generating profile certs ...
	I0510 19:12:09.502683  435640 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kubernetes-upgrade-517660/client.key
	I0510 19:12:09.502712  435640 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kubernetes-upgrade-517660/client.crt with IP's: []
	I0510 19:12:09.656102  435640 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kubernetes-upgrade-517660/client.crt ...
	I0510 19:12:09.656135  435640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kubernetes-upgrade-517660/client.crt: {Name:mk12e4d00e6b043ec9fed3d7802b9457b5fd6368 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 19:12:09.656355  435640 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kubernetes-upgrade-517660/client.key ...
	I0510 19:12:09.656383  435640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kubernetes-upgrade-517660/client.key: {Name:mk777038b97ca930edc4beebb71255404f8ea687 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 19:12:09.656528  435640 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kubernetes-upgrade-517660/apiserver.key.dcec53ca
	I0510 19:12:09.656547  435640 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kubernetes-upgrade-517660/apiserver.crt.dcec53ca with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.244]
	I0510 19:12:10.174901  435640 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kubernetes-upgrade-517660/apiserver.crt.dcec53ca ...
	I0510 19:12:10.174945  435640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kubernetes-upgrade-517660/apiserver.crt.dcec53ca: {Name:mk125c26e1cced4a8515cc78435e3d34adffdb30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 19:12:10.175149  435640 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kubernetes-upgrade-517660/apiserver.key.dcec53ca ...
	I0510 19:12:10.175168  435640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kubernetes-upgrade-517660/apiserver.key.dcec53ca: {Name:mk8e0d41945edc6d7294c02aeb8621f62dafe4cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 19:12:10.175293  435640 certs.go:381] copying /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kubernetes-upgrade-517660/apiserver.crt.dcec53ca -> /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kubernetes-upgrade-517660/apiserver.crt
	I0510 19:12:10.175375  435640 certs.go:385] copying /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kubernetes-upgrade-517660/apiserver.key.dcec53ca -> /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kubernetes-upgrade-517660/apiserver.key
	I0510 19:12:10.175429  435640 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kubernetes-upgrade-517660/proxy-client.key
	I0510 19:12:10.175445  435640 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kubernetes-upgrade-517660/proxy-client.crt with IP's: []
	I0510 19:12:10.558238  435640 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kubernetes-upgrade-517660/proxy-client.crt ...
	I0510 19:12:10.558278  435640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kubernetes-upgrade-517660/proxy-client.crt: {Name:mk408080b3938ac756f884fbacd50034530e4eaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 19:12:10.558483  435640 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kubernetes-upgrade-517660/proxy-client.key ...
	I0510 19:12:10.558499  435640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kubernetes-upgrade-517660/proxy-client.key: {Name:mk604af317dcd986a5e2615b0daa7d5407a179be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 19:12:10.558701  435640 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/395980.pem (1338 bytes)
	W0510 19:12:10.558763  435640 certs.go:480] ignoring /home/jenkins/minikube-integration/20720-388787/.minikube/certs/395980_empty.pem, impossibly tiny 0 bytes
	I0510 19:12:10.558778  435640 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem (1679 bytes)
	I0510 19:12:10.558811  435640 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem (1078 bytes)
	I0510 19:12:10.558848  435640 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem (1123 bytes)
	I0510 19:12:10.558881  435640 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem (1675 bytes)
	I0510 19:12:10.558941  435640 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem (1708 bytes)
	I0510 19:12:10.559528  435640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0510 19:12:10.598443  435640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0510 19:12:10.632419  435640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0510 19:12:10.666507  435640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0510 19:12:10.705080  435640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kubernetes-upgrade-517660/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0510 19:12:10.741296  435640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kubernetes-upgrade-517660/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0510 19:12:10.779889  435640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kubernetes-upgrade-517660/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0510 19:12:10.819642  435640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kubernetes-upgrade-517660/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0510 19:12:10.855168  435640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/certs/395980.pem --> /usr/share/ca-certificates/395980.pem (1338 bytes)
	I0510 19:12:10.885829  435640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem --> /usr/share/ca-certificates/3959802.pem (1708 bytes)
	I0510 19:12:10.924634  435640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0510 19:12:10.958407  435640 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0510 19:12:10.982857  435640 ssh_runner.go:195] Run: openssl version
	I0510 19:12:10.990157  435640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3959802.pem && ln -fs /usr/share/ca-certificates/3959802.pem /etc/ssl/certs/3959802.pem"
	I0510 19:12:11.004653  435640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3959802.pem
	I0510 19:12:11.010455  435640 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 10 18:00 /usr/share/ca-certificates/3959802.pem
	I0510 19:12:11.010537  435640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3959802.pem
	I0510 19:12:11.018289  435640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3959802.pem /etc/ssl/certs/3ec20f2e.0"
	I0510 19:12:11.031904  435640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0510 19:12:11.046632  435640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0510 19:12:11.052263  435640 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 10 17:52 /usr/share/ca-certificates/minikubeCA.pem
	I0510 19:12:11.052341  435640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0510 19:12:11.060554  435640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0510 19:12:11.085865  435640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/395980.pem && ln -fs /usr/share/ca-certificates/395980.pem /etc/ssl/certs/395980.pem"
	I0510 19:12:11.113140  435640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/395980.pem
	I0510 19:12:11.119641  435640 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 10 18:00 /usr/share/ca-certificates/395980.pem
	I0510 19:12:11.119721  435640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/395980.pem
	I0510 19:12:11.130379  435640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/395980.pem /etc/ssl/certs/51391683.0"
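The certificate installation steps above hash each PEM with openssl x509 -hash and link /etc/ssl/certs/<hash>.0 to it, which is why minikubeCA.pem shows up as b5213941.0. A small Go sketch of that pattern (illustrative only; paths and error handling are simplified assumptions):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCACert computes the OpenSSL subject hash of a PEM certificate and
	// links <certsDir>/<hash>.0 to it, matching the openssl/ln pattern in the
	// log. Idempotency handling is simplified for illustration.
	func installCACert(pemPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
		_ = os.Remove(link) // recreate the link so repeated runs succeed
		return os.Symlink(pemPath, link)
	}

	func main() {
		// Path taken from the log; writing under /etc/ssl/certs requires root.
		if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}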
	I0510 19:12:11.148785  435640 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0510 19:12:11.155991  435640 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0510 19:12:11.156068  435640 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-517660 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-517660 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.244 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 19:12:11.156165  435640 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0510 19:12:11.156231  435640 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0510 19:12:11.229402  435640 cri.go:89] found id: ""
	I0510 19:12:11.229493  435640 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0510 19:12:11.244844  435640 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0510 19:12:11.260197  435640 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0510 19:12:11.276403  435640 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0510 19:12:11.276434  435640 kubeadm.go:157] found existing configuration files:
	
	I0510 19:12:11.276492  435640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0510 19:12:11.289286  435640 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0510 19:12:11.289356  435640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0510 19:12:11.306377  435640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0510 19:12:11.321906  435640 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0510 19:12:11.322001  435640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0510 19:12:11.338811  435640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0510 19:12:11.352217  435640 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0510 19:12:11.352286  435640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0510 19:12:11.365263  435640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0510 19:12:11.377595  435640 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0510 19:12:11.377685  435640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0510 19:12:11.392010  435640 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0510 19:12:11.549129  435640 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0510 19:12:11.549326  435640 kubeadm.go:310] [preflight] Running pre-flight checks
	I0510 19:12:11.734379  435640 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0510 19:12:11.734526  435640 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0510 19:12:11.734662  435640 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0510 19:12:12.016798  435640 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0510 19:12:12.018734  435640 out.go:235]   - Generating certificates and keys ...
	I0510 19:12:12.018844  435640 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0510 19:12:12.018940  435640 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0510 19:12:12.182915  435640 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0510 19:12:12.406187  435640 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0510 19:12:12.525768  435640 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0510 19:12:12.600089  435640 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0510 19:12:12.772584  435640 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0510 19:12:12.773057  435640 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-517660 localhost] and IPs [192.168.72.244 127.0.0.1 ::1]
	I0510 19:12:13.013361  435640 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0510 19:12:13.013945  435640 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-517660 localhost] and IPs [192.168.72.244 127.0.0.1 ::1]
	I0510 19:12:13.265933  435640 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0510 19:12:13.332690  435640 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0510 19:12:13.447300  435640 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0510 19:12:13.447528  435640 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0510 19:12:13.663387  435640 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0510 19:12:13.850772  435640 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0510 19:12:14.075901  435640 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0510 19:12:14.264122  435640 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0510 19:12:14.302856  435640 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0510 19:12:14.306016  435640 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0510 19:12:14.306096  435640 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0510 19:12:14.475052  435640 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0510 19:12:14.477193  435640 out.go:235]   - Booting up control plane ...
	I0510 19:12:14.477341  435640 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0510 19:12:14.492276  435640 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0510 19:12:14.492419  435640 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0510 19:12:14.492568  435640 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0510 19:12:14.495582  435640 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0510 19:12:54.442979  435640 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0510 19:12:54.443984  435640 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:12:54.444422  435640 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:12:59.444139  435640 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:12:59.444466  435640 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:13:09.443712  435640 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:13:09.443996  435640 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:13:29.444063  435640 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:13:29.444258  435640 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:14:09.445981  435640 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:14:09.446369  435640 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:14:09.446398  435640 kubeadm.go:310] 
	I0510 19:14:09.446449  435640 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0510 19:14:09.446527  435640 kubeadm.go:310] 		timed out waiting for the condition
	I0510 19:14:09.446567  435640 kubeadm.go:310] 
	I0510 19:14:09.446638  435640 kubeadm.go:310] 	This error is likely caused by:
	I0510 19:14:09.446688  435640 kubeadm.go:310] 		- The kubelet is not running
	I0510 19:14:09.446835  435640 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0510 19:14:09.446846  435640 kubeadm.go:310] 
	I0510 19:14:09.447001  435640 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0510 19:14:09.447063  435640 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0510 19:14:09.447131  435640 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0510 19:14:09.447151  435640 kubeadm.go:310] 
	I0510 19:14:09.447328  435640 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0510 19:14:09.447455  435640 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0510 19:14:09.447468  435640 kubeadm.go:310] 
	I0510 19:14:09.447607  435640 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0510 19:14:09.447734  435640 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0510 19:14:09.447839  435640 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0510 19:14:09.447948  435640 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0510 19:14:09.447960  435640 kubeadm.go:310] 
	I0510 19:14:09.450134  435640 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0510 19:14:09.450259  435640 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0510 19:14:09.450382  435640 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0510 19:14:09.450576  435640 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-517660 localhost] and IPs [192.168.72.244 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-517660 localhost] and IPs [192.168.72.244 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0510 19:14:09.450624  435640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0510 19:14:11.489335  435640 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.038678781s)
	I0510 19:14:11.489429  435640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0510 19:14:11.506799  435640 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0510 19:14:11.519851  435640 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0510 19:14:11.519884  435640 kubeadm.go:157] found existing configuration files:
	
	I0510 19:14:11.519945  435640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0510 19:14:11.531528  435640 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0510 19:14:11.531693  435640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0510 19:14:11.544527  435640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0510 19:14:11.557020  435640 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0510 19:14:11.557096  435640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0510 19:14:11.569366  435640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0510 19:14:11.581111  435640 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0510 19:14:11.581178  435640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0510 19:14:11.594750  435640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0510 19:14:11.607330  435640 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0510 19:14:11.607421  435640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0510 19:14:11.620003  435640 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0510 19:14:11.690232  435640 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0510 19:14:11.690311  435640 kubeadm.go:310] [preflight] Running pre-flight checks
	I0510 19:14:11.841637  435640 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0510 19:14:11.841774  435640 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0510 19:14:11.841901  435640 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0510 19:14:12.056030  435640 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0510 19:14:12.058310  435640 out.go:235]   - Generating certificates and keys ...
	I0510 19:14:12.058416  435640 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0510 19:14:12.058489  435640 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0510 19:14:12.058599  435640 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0510 19:14:12.058679  435640 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0510 19:14:12.058798  435640 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0510 19:14:12.058883  435640 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0510 19:14:12.058994  435640 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0510 19:14:12.059423  435640 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0510 19:14:12.059850  435640 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0510 19:14:12.060381  435640 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0510 19:14:12.060441  435640 kubeadm.go:310] [certs] Using the existing "sa" key
	I0510 19:14:12.060502  435640 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0510 19:14:12.227526  435640 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0510 19:14:12.375403  435640 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0510 19:14:12.596860  435640 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0510 19:14:12.866083  435640 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0510 19:14:12.883745  435640 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0510 19:14:12.884832  435640 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0510 19:14:12.885058  435640 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0510 19:14:13.115435  435640 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0510 19:14:13.117514  435640 out.go:235]   - Booting up control plane ...
	I0510 19:14:13.117650  435640 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0510 19:14:13.131161  435640 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0510 19:14:13.132471  435640 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0510 19:14:13.133524  435640 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0510 19:14:13.140271  435640 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0510 19:14:53.141686  435640 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0510 19:14:53.141890  435640 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:14:53.142148  435640 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:14:58.142826  435640 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:14:58.143140  435640 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:15:08.143901  435640 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:15:08.144179  435640 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:15:28.145780  435640 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:15:28.146070  435640 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:16:08.149134  435640 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:16:08.149618  435640 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:16:08.149655  435640 kubeadm.go:310] 
	I0510 19:16:08.149723  435640 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0510 19:16:08.150154  435640 kubeadm.go:310] 		timed out waiting for the condition
	I0510 19:16:08.150172  435640 kubeadm.go:310] 
	I0510 19:16:08.150253  435640 kubeadm.go:310] 	This error is likely caused by:
	I0510 19:16:08.150308  435640 kubeadm.go:310] 		- The kubelet is not running
	I0510 19:16:08.150456  435640 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0510 19:16:08.150468  435640 kubeadm.go:310] 
	I0510 19:16:08.150620  435640 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0510 19:16:08.150686  435640 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0510 19:16:08.150747  435640 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0510 19:16:08.150773  435640 kubeadm.go:310] 
	I0510 19:16:08.150921  435640 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0510 19:16:08.151052  435640 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0510 19:16:08.151083  435640 kubeadm.go:310] 
	I0510 19:16:08.151282  435640 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0510 19:16:08.151410  435640 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0510 19:16:08.151550  435640 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0510 19:16:08.151668  435640 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0510 19:16:08.151682  435640 kubeadm.go:310] 
	I0510 19:16:08.155717  435640 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0510 19:16:08.155884  435640 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0510 19:16:08.155964  435640 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0510 19:16:08.156036  435640 kubeadm.go:394] duration metric: took 3m56.999973971s to StartCluster
	I0510 19:16:08.156079  435640 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:16:08.156142  435640 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:16:08.208918  435640 cri.go:89] found id: ""
	I0510 19:16:08.208969  435640 logs.go:282] 0 containers: []
	W0510 19:16:08.208980  435640 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:16:08.208990  435640 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:16:08.209073  435640 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:16:08.251307  435640 cri.go:89] found id: ""
	I0510 19:16:08.251341  435640 logs.go:282] 0 containers: []
	W0510 19:16:08.251354  435640 logs.go:284] No container was found matching "etcd"
	I0510 19:16:08.251362  435640 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:16:08.251439  435640 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:16:08.293606  435640 cri.go:89] found id: ""
	I0510 19:16:08.293641  435640 logs.go:282] 0 containers: []
	W0510 19:16:08.293659  435640 logs.go:284] No container was found matching "coredns"
	I0510 19:16:08.293670  435640 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:16:08.293745  435640 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:16:08.344055  435640 cri.go:89] found id: ""
	I0510 19:16:08.344091  435640 logs.go:282] 0 containers: []
	W0510 19:16:08.344103  435640 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:16:08.344112  435640 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:16:08.344185  435640 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:16:08.394854  435640 cri.go:89] found id: ""
	I0510 19:16:08.394898  435640 logs.go:282] 0 containers: []
	W0510 19:16:08.394912  435640 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:16:08.394921  435640 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:16:08.394992  435640 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:16:08.444142  435640 cri.go:89] found id: ""
	I0510 19:16:08.444170  435640 logs.go:282] 0 containers: []
	W0510 19:16:08.444178  435640 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:16:08.444184  435640 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:16:08.444249  435640 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:16:08.492742  435640 cri.go:89] found id: ""
	I0510 19:16:08.492770  435640 logs.go:282] 0 containers: []
	W0510 19:16:08.492778  435640 logs.go:284] No container was found matching "kindnet"
	I0510 19:16:08.492789  435640 logs.go:123] Gathering logs for dmesg ...
	I0510 19:16:08.492803  435640 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:16:08.512815  435640 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:16:08.512865  435640 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:16:08.652982  435640 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:16:08.653012  435640 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:16:08.653033  435640 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:16:08.760204  435640 logs.go:123] Gathering logs for container status ...
	I0510 19:16:08.760264  435640 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:16:08.811972  435640 logs.go:123] Gathering logs for kubelet ...
	I0510 19:16:08.812010  435640 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0510 19:16:08.871564  435640 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0510 19:16:08.871660  435640 out.go:270] * 
	W0510 19:16:08.871741  435640 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0510 19:16:08.871763  435640 out.go:270] * 
	W0510 19:16:08.872934  435640 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0510 19:16:08.876291  435640 out.go:201] 
	W0510 19:16:08.877661  435640 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0510 19:16:08.877731  435640 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0510 19:16:08.877763  435640 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0510 19:16:08.879376  435640 out.go:201] 

                                                
                                                
** /stderr **
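The kubeadm output above points at the kubelet and the control-plane containers, and the suggestion above adds a cgroup-driver override. A minimal triage sketch, assuming shell access to the node through `minikube ssh` for this profile — the individual commands (systemctl, journalctl, crictl against the cri-o socket, and the --extra-config retry) are taken from the log's own advice, while wrapping them in ssh and running them in this order is illustrative:

	# check kubelet state and recent journal entries inside the VM
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-517660 -- sudo systemctl status kubelet
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-517660 -- sudo journalctl -xeu kubelet --no-pager

	# list control-plane containers through the cri-o socket named in the log,
	# then fetch logs for the failing one (CONTAINERID left as a placeholder, as in the advice)
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-517660 -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-517660 -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

	# retry the start with the cgroup-driver override from the suggestion
	out/minikube-linux-amd64 start -p kubernetes-upgrade-517660 --memory=2200 --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd --driver=kvm2 --container-runtime=crio
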
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-517660 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-517660
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-517660: (4.335578552s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-517660 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-517660 status --format={{.Host}}: exit status 7 (77.569858ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-517660 --memory=2200 --kubernetes-version=v1.33.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-517660 --memory=2200 --kubernetes-version=v1.33.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (43.024869221s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-517660 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-517660 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-517660 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (138.380271ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-517660] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20720
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20720-388787/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-388787/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.33.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-517660
	    minikube start -p kubernetes-upgrade-517660 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5176602 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.33.0, by running:
	    
	    minikube start -p kubernetes-upgrade-517660 --kubernetes-version=v1.33.0
	    

                                                
                                                
** /stderr **
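The downgrade is rejected by design, and the stderr above lists three ways forward. A short sketch of acting on the first option, assuming the existing profile can be discarded — the delete/start commands come from the suggestion text, the version check mirrors the test's own `kubectl version` step, and the --driver/--container-runtime flags simply repeat the flags used throughout this run:

	# confirm what the existing cluster is actually running
	kubectl --context kubernetes-upgrade-517660 version --output=json

	# option 1 from the suggestion: recreate the profile at the older Kubernetes version
	out/minikube-linux-amd64 delete -p kubernetes-upgrade-517660
	out/minikube-linux-amd64 start -p kubernetes-upgrade-517660 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio
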
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-517660 --memory=2200 --kubernetes-version=v1.33.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-517660 --memory=2200 --kubernetes-version=v1.33.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m9.549724394s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2025-05-10 19:18:06.180969931 +0000 UTC m=+5160.508582582
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-517660 -n kubernetes-upgrade-517660
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-517660 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-517660 logs -n 25: (2.010156575s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p NoKubernetes-065180                | NoKubernetes-065180       | jenkins | v1.35.0 | 10 May 25 19:13 UTC | 10 May 25 19:14 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p force-systemd-flag-525854          | force-systemd-flag-525854 | jenkins | v1.35.0 | 10 May 25 19:13 UTC | 10 May 25 19:14 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-317241                       | pause-317241              | jenkins | v1.35.0 | 10 May 25 19:13 UTC | 10 May 25 19:14 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-065180 sudo           | NoKubernetes-065180       | jenkins | v1.35.0 | 10 May 25 19:14 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-065180                | NoKubernetes-065180       | jenkins | v1.35.0 | 10 May 25 19:14 UTC | 10 May 25 19:14 UTC |
	| start   | -p force-systemd-env-429136           | force-systemd-env-429136  | jenkins | v1.35.0 | 10 May 25 19:14 UTC | 10 May 25 19:15 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-525854 ssh cat     | force-systemd-flag-525854 | jenkins | v1.35.0 | 10 May 25 19:14 UTC | 10 May 25 19:14 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-525854          | force-systemd-flag-525854 | jenkins | v1.35.0 | 10 May 25 19:14 UTC | 10 May 25 19:14 UTC |
	| start   | -p cert-expiration-355262             | cert-expiration-355262    | jenkins | v1.35.0 | 10 May 25 19:14 UTC | 10 May 25 19:15 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p pause-317241                       | pause-317241              | jenkins | v1.35.0 | 10 May 25 19:15 UTC | 10 May 25 19:15 UTC |
	| start   | -p stopped-upgrade-181866             | minikube                  | jenkins | v1.26.0 | 10 May 25 19:15 UTC | 10 May 25 19:16 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-429136           | force-systemd-env-429136  | jenkins | v1.35.0 | 10 May 25 19:15 UTC | 10 May 25 19:15 UTC |
	| start   | -p cert-options-178760                | cert-options-178760       | jenkins | v1.35.0 | 10 May 25 19:15 UTC | 10 May 25 19:16 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-517660          | kubernetes-upgrade-517660 | jenkins | v1.35.0 | 10 May 25 19:16 UTC | 10 May 25 19:16 UTC |
	| stop    | stopped-upgrade-181866 stop           | minikube                  | jenkins | v1.26.0 | 10 May 25 19:16 UTC | 10 May 25 19:16 UTC |
	| start   | -p kubernetes-upgrade-517660          | kubernetes-upgrade-517660 | jenkins | v1.35.0 | 10 May 25 19:16 UTC | 10 May 25 19:16 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p stopped-upgrade-181866             | stopped-upgrade-181866    | jenkins | v1.35.0 | 10 May 25 19:16 UTC | 10 May 25 19:17 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-178760 ssh               | cert-options-178760       | jenkins | v1.35.0 | 10 May 25 19:16 UTC | 10 May 25 19:16 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-178760 -- sudo        | cert-options-178760       | jenkins | v1.35.0 | 10 May 25 19:16 UTC | 10 May 25 19:16 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-178760                | cert-options-178760       | jenkins | v1.35.0 | 10 May 25 19:16 UTC | 10 May 25 19:16 UTC |
	| start   | -p auto-380533 --memory=3072          | auto-380533               | jenkins | v1.35.0 | 10 May 25 19:16 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-517660          | kubernetes-upgrade-517660 | jenkins | v1.35.0 | 10 May 25 19:16 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-517660          | kubernetes-upgrade-517660 | jenkins | v1.35.0 | 10 May 25 19:16 UTC | 10 May 25 19:18 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-181866             | stopped-upgrade-181866    | jenkins | v1.35.0 | 10 May 25 19:17 UTC | 10 May 25 19:17 UTC |
	| start   | -p kindnet-380533                     | kindnet-380533            | jenkins | v1.35.0 | 10 May 25 19:17 UTC |                     |
	|         | --memory=3072                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/05/10 19:17:20
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0510 19:17:20.852465  441585 out.go:345] Setting OutFile to fd 1 ...
	I0510 19:17:20.852738  441585 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 19:17:20.852747  441585 out.go:358] Setting ErrFile to fd 2...
	I0510 19:17:20.852752  441585 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 19:17:20.853080  441585 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-388787/.minikube/bin
	I0510 19:17:20.853784  441585 out.go:352] Setting JSON to false
	I0510 19:17:20.854949  441585 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":32389,"bootTime":1746872252,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0510 19:17:20.855050  441585 start.go:140] virtualization: kvm guest
	I0510 19:17:20.857281  441585 out.go:177] * [kindnet-380533] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0510 19:17:20.858973  441585 out.go:177]   - MINIKUBE_LOCATION=20720
	I0510 19:17:20.858966  441585 notify.go:220] Checking for updates...
	I0510 19:17:20.861719  441585 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0510 19:17:20.863097  441585 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20720-388787/kubeconfig
	I0510 19:17:20.864576  441585 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-388787/.minikube
	I0510 19:17:20.866062  441585 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0510 19:17:20.867545  441585 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0510 19:17:20.869441  441585 config.go:182] Loaded profile config "auto-380533": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 19:17:20.869593  441585 config.go:182] Loaded profile config "cert-expiration-355262": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 19:17:20.869758  441585 config.go:182] Loaded profile config "kubernetes-upgrade-517660": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 19:17:20.869907  441585 driver.go:404] Setting default libvirt URI to qemu:///system
	I0510 19:17:20.913736  441585 out.go:177] * Using the kvm2 driver based on user configuration
	I0510 19:17:20.915134  441585 start.go:304] selected driver: kvm2
	I0510 19:17:20.915159  441585 start.go:908] validating driver "kvm2" against <nil>
	I0510 19:17:20.915173  441585 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0510 19:17:20.916119  441585 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0510 19:17:20.916224  441585 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20720-388787/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0510 19:17:20.935949  441585 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0510 19:17:20.936008  441585 start_flags.go:311] no existing cluster config was found, will generate one from the flags 
	I0510 19:17:20.936308  441585 start_flags.go:975] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0510 19:17:20.936341  441585 cni.go:84] Creating CNI manager for "kindnet"
	I0510 19:17:20.936347  441585 start_flags.go:320] Found "CNI" CNI - setting NetworkPlugin=cni
	I0510 19:17:20.936407  441585 start.go:347] cluster config:
	{Name:kindnet-380533 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:kindnet-380533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 19:17:20.936537  441585 iso.go:125] acquiring lock: {Name:mk19640015999219180c6685480547adf0c02201 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0510 19:17:20.938783  441585 out.go:177] * Starting "kindnet-380533" primary control-plane node in "kindnet-380533" cluster
	I0510 19:17:20.827367  441211 machine.go:93] provisionDockerMachine start ...
	I0510 19:17:20.827401  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .DriverName
	I0510 19:17:20.827665  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHHostname
	I0510 19:17:20.830914  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:17:20.831451  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:3b:ac", ip: ""} in network mk-kubernetes-upgrade-517660: {Iface:virbr4 ExpiryTime:2025-05-10 20:16:26 +0000 UTC Type:0 Mac:52:54:00:1b:3b:ac Iaid: IPaddr:192.168.72.244 Prefix:24 Hostname:kubernetes-upgrade-517660 Clientid:01:52:54:00:1b:3b:ac}
	I0510 19:17:20.831483  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined IP address 192.168.72.244 and MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:17:20.831702  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHPort
	I0510 19:17:20.831900  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHKeyPath
	I0510 19:17:20.832082  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHKeyPath
	I0510 19:17:20.832246  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHUsername
	I0510 19:17:20.832443  441211 main.go:141] libmachine: Using SSH client type: native
	I0510 19:17:20.832765  441211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.72.244 22 <nil> <nil>}
	I0510 19:17:20.832779  441211 main.go:141] libmachine: About to run SSH command:
	hostname
	I0510 19:17:20.954411  441211 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-517660
	
	I0510 19:17:20.954449  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetMachineName
	I0510 19:17:20.954767  441211 buildroot.go:166] provisioning hostname "kubernetes-upgrade-517660"
	I0510 19:17:20.954804  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetMachineName
	I0510 19:17:20.955092  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHHostname
	I0510 19:17:20.958402  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:17:20.958793  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:3b:ac", ip: ""} in network mk-kubernetes-upgrade-517660: {Iface:virbr4 ExpiryTime:2025-05-10 20:16:26 +0000 UTC Type:0 Mac:52:54:00:1b:3b:ac Iaid: IPaddr:192.168.72.244 Prefix:24 Hostname:kubernetes-upgrade-517660 Clientid:01:52:54:00:1b:3b:ac}
	I0510 19:17:20.958820  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined IP address 192.168.72.244 and MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:17:20.959024  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHPort
	I0510 19:17:20.959225  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHKeyPath
	I0510 19:17:20.959414  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHKeyPath
	I0510 19:17:20.959580  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHUsername
	I0510 19:17:20.959776  441211 main.go:141] libmachine: Using SSH client type: native
	I0510 19:17:20.960040  441211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.72.244 22 <nil> <nil>}
	I0510 19:17:20.960054  441211 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-517660 && echo "kubernetes-upgrade-517660" | sudo tee /etc/hostname
	I0510 19:17:21.088669  441211 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-517660
	
	I0510 19:17:21.088712  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHHostname
	I0510 19:17:21.092124  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:17:21.092619  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:3b:ac", ip: ""} in network mk-kubernetes-upgrade-517660: {Iface:virbr4 ExpiryTime:2025-05-10 20:16:26 +0000 UTC Type:0 Mac:52:54:00:1b:3b:ac Iaid: IPaddr:192.168.72.244 Prefix:24 Hostname:kubernetes-upgrade-517660 Clientid:01:52:54:00:1b:3b:ac}
	I0510 19:17:21.092658  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined IP address 192.168.72.244 and MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:17:21.092836  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHPort
	I0510 19:17:21.093058  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHKeyPath
	I0510 19:17:21.093272  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHKeyPath
	I0510 19:17:21.093455  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHUsername
	I0510 19:17:21.093596  441211 main.go:141] libmachine: Using SSH client type: native
	I0510 19:17:21.093879  441211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.72.244 22 <nil> <nil>}
	I0510 19:17:21.093907  441211 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-517660' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-517660/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-517660' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0510 19:17:21.209356  441211 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0510 19:17:21.209405  441211 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20720-388787/.minikube CaCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20720-388787/.minikube}
	I0510 19:17:21.209433  441211 buildroot.go:174] setting up certificates
	I0510 19:17:21.209444  441211 provision.go:84] configureAuth start
	I0510 19:17:21.209458  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetMachineName
	I0510 19:17:21.209803  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetIP
	I0510 19:17:21.213186  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:17:21.213677  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:3b:ac", ip: ""} in network mk-kubernetes-upgrade-517660: {Iface:virbr4 ExpiryTime:2025-05-10 20:16:26 +0000 UTC Type:0 Mac:52:54:00:1b:3b:ac Iaid: IPaddr:192.168.72.244 Prefix:24 Hostname:kubernetes-upgrade-517660 Clientid:01:52:54:00:1b:3b:ac}
	I0510 19:17:21.213730  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined IP address 192.168.72.244 and MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:17:21.213920  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHHostname
	I0510 19:17:21.216555  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:17:21.216933  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:3b:ac", ip: ""} in network mk-kubernetes-upgrade-517660: {Iface:virbr4 ExpiryTime:2025-05-10 20:16:26 +0000 UTC Type:0 Mac:52:54:00:1b:3b:ac Iaid: IPaddr:192.168.72.244 Prefix:24 Hostname:kubernetes-upgrade-517660 Clientid:01:52:54:00:1b:3b:ac}
	I0510 19:17:21.216971  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined IP address 192.168.72.244 and MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:17:21.217148  441211 provision.go:143] copyHostCerts
	I0510 19:17:21.217230  441211 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem, removing ...
	I0510 19:17:21.217250  441211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem
	I0510 19:17:21.217321  441211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem (1078 bytes)
	I0510 19:17:21.217451  441211 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem, removing ...
	I0510 19:17:21.217463  441211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem
	I0510 19:17:21.217497  441211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem (1123 bytes)
	I0510 19:17:21.217660  441211 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem, removing ...
	I0510 19:17:21.217673  441211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem
	I0510 19:17:21.217720  441211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem (1675 bytes)
	I0510 19:17:21.217805  441211 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-517660 san=[127.0.0.1 192.168.72.244 kubernetes-upgrade-517660 localhost minikube]
	I0510 19:17:18.504621  440855 main.go:141] libmachine: (auto-380533) DBG | domain auto-380533 has defined MAC address 52:54:00:39:2d:dc in network mk-auto-380533
	I0510 19:17:18.505192  440855 main.go:141] libmachine: (auto-380533) found domain IP: 192.168.50.68
	I0510 19:17:18.505222  440855 main.go:141] libmachine: (auto-380533) DBG | domain auto-380533 has current primary IP address 192.168.50.68 and MAC address 52:54:00:39:2d:dc in network mk-auto-380533
	I0510 19:17:18.505231  440855 main.go:141] libmachine: (auto-380533) reserving static IP address...
	I0510 19:17:18.505622  440855 main.go:141] libmachine: (auto-380533) DBG | unable to find host DHCP lease matching {name: "auto-380533", mac: "52:54:00:39:2d:dc", ip: "192.168.50.68"} in network mk-auto-380533
	I0510 19:17:18.605197  440855 main.go:141] libmachine: (auto-380533) DBG | Getting to WaitForSSH function...
	I0510 19:17:18.605233  440855 main.go:141] libmachine: (auto-380533) reserved static IP address 192.168.50.68 for domain auto-380533
	I0510 19:17:18.605241  440855 main.go:141] libmachine: (auto-380533) waiting for SSH...
	I0510 19:17:18.609013  440855 main.go:141] libmachine: (auto-380533) DBG | domain auto-380533 has defined MAC address 52:54:00:39:2d:dc in network mk-auto-380533
	I0510 19:17:18.609549  440855 main.go:141] libmachine: (auto-380533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:2d:dc", ip: ""} in network mk-auto-380533: {Iface:virbr2 ExpiryTime:2025-05-10 20:17:11 +0000 UTC Type:0 Mac:52:54:00:39:2d:dc Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:minikube Clientid:01:52:54:00:39:2d:dc}
	I0510 19:17:18.609582  440855 main.go:141] libmachine: (auto-380533) DBG | domain auto-380533 has defined IP address 192.168.50.68 and MAC address 52:54:00:39:2d:dc in network mk-auto-380533
	I0510 19:17:18.610027  440855 main.go:141] libmachine: (auto-380533) DBG | Using SSH client type: external
	I0510 19:17:18.610052  440855 main.go:141] libmachine: (auto-380533) DBG | Using SSH private key: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/auto-380533/id_rsa (-rw-------)
	I0510 19:17:18.610086  440855 main.go:141] libmachine: (auto-380533) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.68 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20720-388787/.minikube/machines/auto-380533/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0510 19:17:18.610095  440855 main.go:141] libmachine: (auto-380533) DBG | About to run SSH command:
	I0510 19:17:18.610107  440855 main.go:141] libmachine: (auto-380533) DBG | exit 0
	I0510 19:17:18.748023  440855 main.go:141] libmachine: (auto-380533) DBG | SSH cmd err, output: <nil>: 
	I0510 19:17:18.748322  440855 main.go:141] libmachine: (auto-380533) KVM machine creation complete
	I0510 19:17:18.748686  440855 main.go:141] libmachine: (auto-380533) Calling .GetConfigRaw
	I0510 19:17:18.749249  440855 main.go:141] libmachine: (auto-380533) Calling .DriverName
	I0510 19:17:18.749450  440855 main.go:141] libmachine: (auto-380533) Calling .DriverName
	I0510 19:17:18.749633  440855 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0510 19:17:18.749662  440855 main.go:141] libmachine: (auto-380533) Calling .GetState
	I0510 19:17:18.751311  440855 main.go:141] libmachine: Detecting operating system of created instance...
	I0510 19:17:18.751340  440855 main.go:141] libmachine: Waiting for SSH to be available...
	I0510 19:17:18.751346  440855 main.go:141] libmachine: Getting to WaitForSSH function...
	I0510 19:17:18.751351  440855 main.go:141] libmachine: (auto-380533) Calling .GetSSHHostname
	I0510 19:17:18.753809  440855 main.go:141] libmachine: (auto-380533) DBG | domain auto-380533 has defined MAC address 52:54:00:39:2d:dc in network mk-auto-380533
	I0510 19:17:18.754210  440855 main.go:141] libmachine: (auto-380533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:2d:dc", ip: ""} in network mk-auto-380533: {Iface:virbr2 ExpiryTime:2025-05-10 20:17:11 +0000 UTC Type:0 Mac:52:54:00:39:2d:dc Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:auto-380533 Clientid:01:52:54:00:39:2d:dc}
	I0510 19:17:18.754235  440855 main.go:141] libmachine: (auto-380533) DBG | domain auto-380533 has defined IP address 192.168.50.68 and MAC address 52:54:00:39:2d:dc in network mk-auto-380533
	I0510 19:17:18.754392  440855 main.go:141] libmachine: (auto-380533) Calling .GetSSHPort
	I0510 19:17:18.754599  440855 main.go:141] libmachine: (auto-380533) Calling .GetSSHKeyPath
	I0510 19:17:18.754809  440855 main.go:141] libmachine: (auto-380533) Calling .GetSSHKeyPath
	I0510 19:17:18.754960  440855 main.go:141] libmachine: (auto-380533) Calling .GetSSHUsername
	I0510 19:17:18.755106  440855 main.go:141] libmachine: Using SSH client type: native
	I0510 19:17:18.755392  440855 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.50.68 22 <nil> <nil>}
	I0510 19:17:18.755407  440855 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0510 19:17:18.876168  440855 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0510 19:17:18.876195  440855 main.go:141] libmachine: Detecting the provisioner...
	I0510 19:17:18.876212  440855 main.go:141] libmachine: (auto-380533) Calling .GetSSHHostname
	I0510 19:17:18.879470  440855 main.go:141] libmachine: (auto-380533) DBG | domain auto-380533 has defined MAC address 52:54:00:39:2d:dc in network mk-auto-380533
	I0510 19:17:18.880125  440855 main.go:141] libmachine: (auto-380533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:2d:dc", ip: ""} in network mk-auto-380533: {Iface:virbr2 ExpiryTime:2025-05-10 20:17:11 +0000 UTC Type:0 Mac:52:54:00:39:2d:dc Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:auto-380533 Clientid:01:52:54:00:39:2d:dc}
	I0510 19:17:18.880165  440855 main.go:141] libmachine: (auto-380533) DBG | domain auto-380533 has defined IP address 192.168.50.68 and MAC address 52:54:00:39:2d:dc in network mk-auto-380533
	I0510 19:17:18.880591  440855 main.go:141] libmachine: (auto-380533) Calling .GetSSHPort
	I0510 19:17:18.880892  440855 main.go:141] libmachine: (auto-380533) Calling .GetSSHKeyPath
	I0510 19:17:18.881109  440855 main.go:141] libmachine: (auto-380533) Calling .GetSSHKeyPath
	I0510 19:17:18.881287  440855 main.go:141] libmachine: (auto-380533) Calling .GetSSHUsername
	I0510 19:17:18.881510  440855 main.go:141] libmachine: Using SSH client type: native
	I0510 19:17:18.881717  440855 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.50.68 22 <nil> <nil>}
	I0510 19:17:18.881728  440855 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0510 19:17:19.006406  440855 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2024.11.2-dirty
	ID=buildroot
	VERSION_ID=2024.11.2
	PRETTY_NAME="Buildroot 2024.11.2"
	
	I0510 19:17:19.006513  440855 main.go:141] libmachine: found compatible host: buildroot
	I0510 19:17:19.006529  440855 main.go:141] libmachine: Provisioning with buildroot...
	I0510 19:17:19.006540  440855 main.go:141] libmachine: (auto-380533) Calling .GetMachineName
	I0510 19:17:19.006902  440855 buildroot.go:166] provisioning hostname "auto-380533"
	I0510 19:17:19.006960  440855 main.go:141] libmachine: (auto-380533) Calling .GetMachineName
	I0510 19:17:19.007200  440855 main.go:141] libmachine: (auto-380533) Calling .GetSSHHostname
	I0510 19:17:19.010760  440855 main.go:141] libmachine: (auto-380533) DBG | domain auto-380533 has defined MAC address 52:54:00:39:2d:dc in network mk-auto-380533
	I0510 19:17:19.011262  440855 main.go:141] libmachine: (auto-380533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:2d:dc", ip: ""} in network mk-auto-380533: {Iface:virbr2 ExpiryTime:2025-05-10 20:17:11 +0000 UTC Type:0 Mac:52:54:00:39:2d:dc Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:auto-380533 Clientid:01:52:54:00:39:2d:dc}
	I0510 19:17:19.011311  440855 main.go:141] libmachine: (auto-380533) DBG | domain auto-380533 has defined IP address 192.168.50.68 and MAC address 52:54:00:39:2d:dc in network mk-auto-380533
	I0510 19:17:19.011578  440855 main.go:141] libmachine: (auto-380533) Calling .GetSSHPort
	I0510 19:17:19.011936  440855 main.go:141] libmachine: (auto-380533) Calling .GetSSHKeyPath
	I0510 19:17:19.012163  440855 main.go:141] libmachine: (auto-380533) Calling .GetSSHKeyPath
	I0510 19:17:19.012363  440855 main.go:141] libmachine: (auto-380533) Calling .GetSSHUsername
	I0510 19:17:19.012576  440855 main.go:141] libmachine: Using SSH client type: native
	I0510 19:17:19.012868  440855 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.50.68 22 <nil> <nil>}
	I0510 19:17:19.012886  440855 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-380533 && echo "auto-380533" | sudo tee /etc/hostname
	I0510 19:17:19.152441  440855 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-380533
	
	I0510 19:17:19.152482  440855 main.go:141] libmachine: (auto-380533) Calling .GetSSHHostname
	I0510 19:17:19.156091  440855 main.go:141] libmachine: (auto-380533) DBG | domain auto-380533 has defined MAC address 52:54:00:39:2d:dc in network mk-auto-380533
	I0510 19:17:19.156659  440855 main.go:141] libmachine: (auto-380533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:2d:dc", ip: ""} in network mk-auto-380533: {Iface:virbr2 ExpiryTime:2025-05-10 20:17:11 +0000 UTC Type:0 Mac:52:54:00:39:2d:dc Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:auto-380533 Clientid:01:52:54:00:39:2d:dc}
	I0510 19:17:19.156692  440855 main.go:141] libmachine: (auto-380533) DBG | domain auto-380533 has defined IP address 192.168.50.68 and MAC address 52:54:00:39:2d:dc in network mk-auto-380533
	I0510 19:17:19.156945  440855 main.go:141] libmachine: (auto-380533) Calling .GetSSHPort
	I0510 19:17:19.157146  440855 main.go:141] libmachine: (auto-380533) Calling .GetSSHKeyPath
	I0510 19:17:19.157372  440855 main.go:141] libmachine: (auto-380533) Calling .GetSSHKeyPath
	I0510 19:17:19.157576  440855 main.go:141] libmachine: (auto-380533) Calling .GetSSHUsername
	I0510 19:17:19.157871  440855 main.go:141] libmachine: Using SSH client type: native
	I0510 19:17:19.158124  440855 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.50.68 22 <nil> <nil>}
	I0510 19:17:19.158144  440855 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-380533' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-380533/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-380533' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0510 19:17:19.297691  440855 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0510 19:17:19.297736  440855 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20720-388787/.minikube CaCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20720-388787/.minikube}
	I0510 19:17:19.297873  440855 buildroot.go:174] setting up certificates
	I0510 19:17:19.297895  440855 provision.go:84] configureAuth start
	I0510 19:17:19.297923  440855 main.go:141] libmachine: (auto-380533) Calling .GetMachineName
	I0510 19:17:19.298244  440855 main.go:141] libmachine: (auto-380533) Calling .GetIP
	I0510 19:17:19.301960  440855 main.go:141] libmachine: (auto-380533) DBG | domain auto-380533 has defined MAC address 52:54:00:39:2d:dc in network mk-auto-380533
	I0510 19:17:19.302443  440855 main.go:141] libmachine: (auto-380533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:2d:dc", ip: ""} in network mk-auto-380533: {Iface:virbr2 ExpiryTime:2025-05-10 20:17:11 +0000 UTC Type:0 Mac:52:54:00:39:2d:dc Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:auto-380533 Clientid:01:52:54:00:39:2d:dc}
	I0510 19:17:19.302480  440855 main.go:141] libmachine: (auto-380533) DBG | domain auto-380533 has defined IP address 192.168.50.68 and MAC address 52:54:00:39:2d:dc in network mk-auto-380533
	I0510 19:17:19.302777  440855 main.go:141] libmachine: (auto-380533) Calling .GetSSHHostname
	I0510 19:17:19.305673  440855 main.go:141] libmachine: (auto-380533) DBG | domain auto-380533 has defined MAC address 52:54:00:39:2d:dc in network mk-auto-380533
	I0510 19:17:19.306182  440855 main.go:141] libmachine: (auto-380533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:2d:dc", ip: ""} in network mk-auto-380533: {Iface:virbr2 ExpiryTime:2025-05-10 20:17:11 +0000 UTC Type:0 Mac:52:54:00:39:2d:dc Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:auto-380533 Clientid:01:52:54:00:39:2d:dc}
	I0510 19:17:19.306225  440855 main.go:141] libmachine: (auto-380533) DBG | domain auto-380533 has defined IP address 192.168.50.68 and MAC address 52:54:00:39:2d:dc in network mk-auto-380533
	I0510 19:17:19.306450  440855 provision.go:143] copyHostCerts
	I0510 19:17:19.306538  440855 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem, removing ...
	I0510 19:17:19.306557  440855 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem
	I0510 19:17:19.306627  440855 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem (1078 bytes)
	I0510 19:17:19.306773  440855 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem, removing ...
	I0510 19:17:19.306786  440855 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem
	I0510 19:17:19.306820  440855 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem (1123 bytes)
	I0510 19:17:19.306944  440855 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem, removing ...
	I0510 19:17:19.306956  440855 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem
	I0510 19:17:19.306988  440855 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem (1675 bytes)
	I0510 19:17:19.307118  440855 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem org=jenkins.auto-380533 san=[127.0.0.1 192.168.50.68 auto-380533 localhost minikube]
	I0510 19:17:19.563058  440855 provision.go:177] copyRemoteCerts
	I0510 19:17:19.563131  440855 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0510 19:17:19.563166  440855 main.go:141] libmachine: (auto-380533) Calling .GetSSHHostname
	I0510 19:17:19.566204  440855 main.go:141] libmachine: (auto-380533) DBG | domain auto-380533 has defined MAC address 52:54:00:39:2d:dc in network mk-auto-380533
	I0510 19:17:19.566601  440855 main.go:141] libmachine: (auto-380533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:2d:dc", ip: ""} in network mk-auto-380533: {Iface:virbr2 ExpiryTime:2025-05-10 20:17:11 +0000 UTC Type:0 Mac:52:54:00:39:2d:dc Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:auto-380533 Clientid:01:52:54:00:39:2d:dc}
	I0510 19:17:19.566628  440855 main.go:141] libmachine: (auto-380533) DBG | domain auto-380533 has defined IP address 192.168.50.68 and MAC address 52:54:00:39:2d:dc in network mk-auto-380533
	I0510 19:17:19.566885  440855 main.go:141] libmachine: (auto-380533) Calling .GetSSHPort
	I0510 19:17:19.567089  440855 main.go:141] libmachine: (auto-380533) Calling .GetSSHKeyPath
	I0510 19:17:19.567265  440855 main.go:141] libmachine: (auto-380533) Calling .GetSSHUsername
	I0510 19:17:19.567440  440855 sshutil.go:53] new ssh client: &{IP:192.168.50.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/auto-380533/id_rsa Username:docker}
	I0510 19:17:19.662107  440855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0510 19:17:19.701329  440855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0510 19:17:19.737976  440855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0510 19:17:19.773673  440855 provision.go:87] duration metric: took 475.756585ms to configureAuth
	I0510 19:17:19.773704  440855 buildroot.go:189] setting minikube options for container-runtime
	I0510 19:17:19.773920  440855 config.go:182] Loaded profile config "auto-380533": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 19:17:19.774029  440855 main.go:141] libmachine: (auto-380533) Calling .GetSSHHostname
	I0510 19:17:19.776661  440855 main.go:141] libmachine: (auto-380533) DBG | domain auto-380533 has defined MAC address 52:54:00:39:2d:dc in network mk-auto-380533
	I0510 19:17:19.776986  440855 main.go:141] libmachine: (auto-380533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:2d:dc", ip: ""} in network mk-auto-380533: {Iface:virbr2 ExpiryTime:2025-05-10 20:17:11 +0000 UTC Type:0 Mac:52:54:00:39:2d:dc Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:auto-380533 Clientid:01:52:54:00:39:2d:dc}
	I0510 19:17:19.777013  440855 main.go:141] libmachine: (auto-380533) DBG | domain auto-380533 has defined IP address 192.168.50.68 and MAC address 52:54:00:39:2d:dc in network mk-auto-380533
	I0510 19:17:19.777228  440855 main.go:141] libmachine: (auto-380533) Calling .GetSSHPort
	I0510 19:17:19.777433  440855 main.go:141] libmachine: (auto-380533) Calling .GetSSHKeyPath
	I0510 19:17:19.777669  440855 main.go:141] libmachine: (auto-380533) Calling .GetSSHKeyPath
	I0510 19:17:19.777844  440855 main.go:141] libmachine: (auto-380533) Calling .GetSSHUsername
	I0510 19:17:19.778143  440855 main.go:141] libmachine: Using SSH client type: native
	I0510 19:17:19.778450  440855 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.50.68 22 <nil> <nil>}
	I0510 19:17:19.778471  440855 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0510 19:17:20.055041  440855 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0510 19:17:20.055075  440855 main.go:141] libmachine: Checking connection to Docker...
	I0510 19:17:20.055085  440855 main.go:141] libmachine: (auto-380533) Calling .GetURL
	I0510 19:17:20.507275  440855 main.go:141] libmachine: (auto-380533) DBG | using libvirt version 6000000
	I0510 19:17:20.510123  440855 main.go:141] libmachine: (auto-380533) DBG | domain auto-380533 has defined MAC address 52:54:00:39:2d:dc in network mk-auto-380533
	I0510 19:17:20.510522  440855 main.go:141] libmachine: (auto-380533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:2d:dc", ip: ""} in network mk-auto-380533: {Iface:virbr2 ExpiryTime:2025-05-10 20:17:11 +0000 UTC Type:0 Mac:52:54:00:39:2d:dc Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:auto-380533 Clientid:01:52:54:00:39:2d:dc}
	I0510 19:17:20.510562  440855 main.go:141] libmachine: (auto-380533) DBG | domain auto-380533 has defined IP address 192.168.50.68 and MAC address 52:54:00:39:2d:dc in network mk-auto-380533
	I0510 19:17:20.510718  440855 main.go:141] libmachine: Docker is up and running!
	I0510 19:17:20.510737  440855 main.go:141] libmachine: Reticulating splines...
	I0510 19:17:20.510746  440855 client.go:171] duration metric: took 26.892917495s to LocalClient.Create
	I0510 19:17:20.510780  440855 start.go:167] duration metric: took 26.892997182s to libmachine.API.Create "auto-380533"
	I0510 19:17:20.510882  440855 start.go:293] postStartSetup for "auto-380533" (driver="kvm2")
	I0510 19:17:20.510907  440855 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0510 19:17:20.510963  440855 main.go:141] libmachine: (auto-380533) Calling .DriverName
	I0510 19:17:20.511294  440855 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0510 19:17:20.511326  440855 main.go:141] libmachine: (auto-380533) Calling .GetSSHHostname
	I0510 19:17:20.513952  440855 main.go:141] libmachine: (auto-380533) DBG | domain auto-380533 has defined MAC address 52:54:00:39:2d:dc in network mk-auto-380533
	I0510 19:17:20.514374  440855 main.go:141] libmachine: (auto-380533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:2d:dc", ip: ""} in network mk-auto-380533: {Iface:virbr2 ExpiryTime:2025-05-10 20:17:11 +0000 UTC Type:0 Mac:52:54:00:39:2d:dc Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:auto-380533 Clientid:01:52:54:00:39:2d:dc}
	I0510 19:17:20.514403  440855 main.go:141] libmachine: (auto-380533) DBG | domain auto-380533 has defined IP address 192.168.50.68 and MAC address 52:54:00:39:2d:dc in network mk-auto-380533
	I0510 19:17:20.514577  440855 main.go:141] libmachine: (auto-380533) Calling .GetSSHPort
	I0510 19:17:20.514789  440855 main.go:141] libmachine: (auto-380533) Calling .GetSSHKeyPath
	I0510 19:17:20.514943  440855 main.go:141] libmachine: (auto-380533) Calling .GetSSHUsername
	I0510 19:17:20.515209  440855 sshutil.go:53] new ssh client: &{IP:192.168.50.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/auto-380533/id_rsa Username:docker}
	I0510 19:17:20.608686  440855 ssh_runner.go:195] Run: cat /etc/os-release
	I0510 19:17:20.613827  440855 info.go:137] Remote host: Buildroot 2024.11.2
	I0510 19:17:20.613858  440855 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-388787/.minikube/addons for local assets ...
	I0510 19:17:20.613923  440855 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-388787/.minikube/files for local assets ...
	I0510 19:17:20.614013  440855 filesync.go:149] local asset: /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem -> 3959802.pem in /etc/ssl/certs
	I0510 19:17:20.614105  440855 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0510 19:17:20.627102  440855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem --> /etc/ssl/certs/3959802.pem (1708 bytes)
	I0510 19:17:20.662984  440855 start.go:296] duration metric: took 152.080151ms for postStartSetup
	I0510 19:17:20.663051  440855 main.go:141] libmachine: (auto-380533) Calling .GetConfigRaw
	I0510 19:17:20.663713  440855 main.go:141] libmachine: (auto-380533) Calling .GetIP
	I0510 19:17:20.666767  440855 main.go:141] libmachine: (auto-380533) DBG | domain auto-380533 has defined MAC address 52:54:00:39:2d:dc in network mk-auto-380533
	I0510 19:17:20.667225  440855 main.go:141] libmachine: (auto-380533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:2d:dc", ip: ""} in network mk-auto-380533: {Iface:virbr2 ExpiryTime:2025-05-10 20:17:11 +0000 UTC Type:0 Mac:52:54:00:39:2d:dc Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:auto-380533 Clientid:01:52:54:00:39:2d:dc}
	I0510 19:17:20.667281  440855 main.go:141] libmachine: (auto-380533) DBG | domain auto-380533 has defined IP address 192.168.50.68 and MAC address 52:54:00:39:2d:dc in network mk-auto-380533
	I0510 19:17:20.667565  440855 profile.go:143] Saving config to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/auto-380533/config.json ...
	I0510 19:17:20.667811  440855 start.go:128] duration metric: took 27.071515178s to createHost
	I0510 19:17:20.667843  440855 main.go:141] libmachine: (auto-380533) Calling .GetSSHHostname
	I0510 19:17:20.670931  440855 main.go:141] libmachine: (auto-380533) DBG | domain auto-380533 has defined MAC address 52:54:00:39:2d:dc in network mk-auto-380533
	I0510 19:17:20.671465  440855 main.go:141] libmachine: (auto-380533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:2d:dc", ip: ""} in network mk-auto-380533: {Iface:virbr2 ExpiryTime:2025-05-10 20:17:11 +0000 UTC Type:0 Mac:52:54:00:39:2d:dc Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:auto-380533 Clientid:01:52:54:00:39:2d:dc}
	I0510 19:17:20.671519  440855 main.go:141] libmachine: (auto-380533) DBG | domain auto-380533 has defined IP address 192.168.50.68 and MAC address 52:54:00:39:2d:dc in network mk-auto-380533
	I0510 19:17:20.671658  440855 main.go:141] libmachine: (auto-380533) Calling .GetSSHPort
	I0510 19:17:20.672061  440855 main.go:141] libmachine: (auto-380533) Calling .GetSSHKeyPath
	I0510 19:17:20.672247  440855 main.go:141] libmachine: (auto-380533) Calling .GetSSHKeyPath
	I0510 19:17:20.672387  440855 main.go:141] libmachine: (auto-380533) Calling .GetSSHUsername
	I0510 19:17:20.672612  440855 main.go:141] libmachine: Using SSH client type: native
	I0510 19:17:20.673013  440855 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.50.68 22 <nil> <nil>}
	I0510 19:17:20.673036  440855 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0510 19:17:20.797341  440855 main.go:141] libmachine: SSH cmd err, output: <nil>: 1746904640.780045336
	
	I0510 19:17:20.797370  440855 fix.go:216] guest clock: 1746904640.780045336
	I0510 19:17:20.797390  440855 fix.go:229] Guest: 2025-05-10 19:17:20.780045336 +0000 UTC Remote: 2025-05-10 19:17:20.667826842 +0000 UTC m=+48.499461158 (delta=112.218494ms)
	I0510 19:17:20.797427  440855 fix.go:200] guest clock delta is within tolerance: 112.218494ms
	I0510 19:17:20.797433  440855 start.go:83] releasing machines lock for "auto-380533", held for 27.201299739s
	I0510 19:17:20.797461  440855 main.go:141] libmachine: (auto-380533) Calling .DriverName
	I0510 19:17:20.797819  440855 main.go:141] libmachine: (auto-380533) Calling .GetIP
	I0510 19:17:20.801215  440855 main.go:141] libmachine: (auto-380533) DBG | domain auto-380533 has defined MAC address 52:54:00:39:2d:dc in network mk-auto-380533
	I0510 19:17:20.801584  440855 main.go:141] libmachine: (auto-380533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:2d:dc", ip: ""} in network mk-auto-380533: {Iface:virbr2 ExpiryTime:2025-05-10 20:17:11 +0000 UTC Type:0 Mac:52:54:00:39:2d:dc Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:auto-380533 Clientid:01:52:54:00:39:2d:dc}
	I0510 19:17:20.801616  440855 main.go:141] libmachine: (auto-380533) DBG | domain auto-380533 has defined IP address 192.168.50.68 and MAC address 52:54:00:39:2d:dc in network mk-auto-380533
	I0510 19:17:20.801831  440855 main.go:141] libmachine: (auto-380533) Calling .DriverName
	I0510 19:17:20.803907  440855 main.go:141] libmachine: (auto-380533) Calling .DriverName
	I0510 19:17:20.804135  440855 main.go:141] libmachine: (auto-380533) Calling .DriverName
	I0510 19:17:20.804264  440855 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0510 19:17:20.804308  440855 main.go:141] libmachine: (auto-380533) Calling .GetSSHHostname
	I0510 19:17:20.804406  440855 ssh_runner.go:195] Run: cat /version.json
	I0510 19:17:20.804436  440855 main.go:141] libmachine: (auto-380533) Calling .GetSSHHostname
	I0510 19:17:20.807733  440855 main.go:141] libmachine: (auto-380533) DBG | domain auto-380533 has defined MAC address 52:54:00:39:2d:dc in network mk-auto-380533
	I0510 19:17:20.808726  440855 main.go:141] libmachine: (auto-380533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:2d:dc", ip: ""} in network mk-auto-380533: {Iface:virbr2 ExpiryTime:2025-05-10 20:17:11 +0000 UTC Type:0 Mac:52:54:00:39:2d:dc Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:auto-380533 Clientid:01:52:54:00:39:2d:dc}
	I0510 19:17:20.808785  440855 main.go:141] libmachine: (auto-380533) DBG | domain auto-380533 has defined IP address 192.168.50.68 and MAC address 52:54:00:39:2d:dc in network mk-auto-380533
	I0510 19:17:20.808819  440855 main.go:141] libmachine: (auto-380533) DBG | domain auto-380533 has defined MAC address 52:54:00:39:2d:dc in network mk-auto-380533
	I0510 19:17:20.808748  440855 main.go:141] libmachine: (auto-380533) Calling .GetSSHPort
	I0510 19:17:20.808974  440855 main.go:141] libmachine: (auto-380533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:2d:dc", ip: ""} in network mk-auto-380533: {Iface:virbr2 ExpiryTime:2025-05-10 20:17:11 +0000 UTC Type:0 Mac:52:54:00:39:2d:dc Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:auto-380533 Clientid:01:52:54:00:39:2d:dc}
	I0510 19:17:20.809159  440855 main.go:141] libmachine: (auto-380533) Calling .GetSSHKeyPath
	I0510 19:17:20.809235  440855 main.go:141] libmachine: (auto-380533) Calling .GetSSHPort
	I0510 19:17:20.809252  440855 main.go:141] libmachine: (auto-380533) DBG | domain auto-380533 has defined IP address 192.168.50.68 and MAC address 52:54:00:39:2d:dc in network mk-auto-380533
	I0510 19:17:20.809326  440855 main.go:141] libmachine: (auto-380533) Calling .GetSSHUsername
	I0510 19:17:20.809484  440855 main.go:141] libmachine: (auto-380533) Calling .GetSSHKeyPath
	I0510 19:17:20.809516  440855 sshutil.go:53] new ssh client: &{IP:192.168.50.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/auto-380533/id_rsa Username:docker}
	I0510 19:17:20.809651  440855 main.go:141] libmachine: (auto-380533) Calling .GetSSHUsername
	I0510 19:17:20.809855  440855 sshutil.go:53] new ssh client: &{IP:192.168.50.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/auto-380533/id_rsa Username:docker}
	I0510 19:17:20.901682  440855 ssh_runner.go:195] Run: systemctl --version
	I0510 19:17:20.927596  440855 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0510 19:17:21.099302  440855 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0510 19:17:21.106997  440855 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0510 19:17:21.107079  440855 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0510 19:17:21.128757  440855 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0510 19:17:21.128797  440855 start.go:495] detecting cgroup driver to use...
	I0510 19:17:21.128877  440855 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0510 19:17:21.149561  440855 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0510 19:17:21.168275  440855 docker.go:225] disabling cri-docker service (if available) ...
	I0510 19:17:21.168349  440855 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0510 19:17:21.186642  440855 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0510 19:17:21.205580  440855 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0510 19:17:21.365907  440855 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0510 19:17:21.540138  440855 docker.go:241] disabling docker service ...
	I0510 19:17:21.540236  440855 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0510 19:17:21.558909  440855 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0510 19:17:21.574931  440855 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0510 19:17:21.804933  440855 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0510 19:17:21.975108  440855 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0510 19:17:21.991871  440855 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0510 19:17:22.016681  440855 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0510 19:17:22.016752  440855 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:17:22.029311  440855 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0510 19:17:22.029388  440855 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:17:22.043930  440855 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:17:22.057672  440855 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:17:22.070494  440855 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0510 19:17:22.084722  440855 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:17:22.098464  440855 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:17:22.120769  440855 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:17:22.136454  440855 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0510 19:17:22.148233  440855 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0510 19:17:22.148305  440855 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0510 19:17:22.164785  440855 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0510 19:17:22.177946  440855 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 19:17:22.319866  440855 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0510 19:17:22.438216  440855 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0510 19:17:22.438298  440855 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0510 19:17:22.444075  440855 start.go:563] Will wait 60s for crictl version
	I0510 19:17:22.444144  440855 ssh_runner.go:195] Run: which crictl
	I0510 19:17:22.448472  440855 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0510 19:17:22.492262  440855 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0510 19:17:22.492342  440855 ssh_runner.go:195] Run: crio --version
	I0510 19:17:22.524613  440855 ssh_runner.go:195] Run: crio --version
	I0510 19:17:22.555868  440855 out.go:177] * Preparing Kubernetes v1.33.0 on CRI-O 1.29.1 ...
	I0510 19:17:20.940263  441585 preload.go:131] Checking if preload exists for k8s version v1.33.0 and runtime crio
	I0510 19:17:20.940331  441585 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-cri-o-overlay-amd64.tar.lz4
	I0510 19:17:20.940344  441585 cache.go:56] Caching tarball of preloaded images
	I0510 19:17:20.940457  441585 preload.go:172] Found /home/jenkins/minikube-integration/20720-388787/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0510 19:17:20.940475  441585 cache.go:59] Finished verifying existence of preloaded tar for v1.33.0 on crio
	I0510 19:17:20.940626  441585 profile.go:143] Saving config to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kindnet-380533/config.json ...
	I0510 19:17:20.940671  441585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kindnet-380533/config.json: {Name:mk8b10d7f7bc1d1726dffab163c52bf1c3af8100 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 19:17:20.940893  441585 start.go:360] acquireMachinesLock for kindnet-380533: {Name:mk11499d7756d503a7a24339ad1a7f9ab9dc0fab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0510 19:17:21.705977  441211 provision.go:177] copyRemoteCerts
	I0510 19:17:21.706043  441211 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0510 19:17:21.706073  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHHostname
	I0510 19:17:21.709339  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:17:21.709751  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:3b:ac", ip: ""} in network mk-kubernetes-upgrade-517660: {Iface:virbr4 ExpiryTime:2025-05-10 20:16:26 +0000 UTC Type:0 Mac:52:54:00:1b:3b:ac Iaid: IPaddr:192.168.72.244 Prefix:24 Hostname:kubernetes-upgrade-517660 Clientid:01:52:54:00:1b:3b:ac}
	I0510 19:17:21.709787  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined IP address 192.168.72.244 and MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:17:21.710014  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHPort
	I0510 19:17:21.710239  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHKeyPath
	I0510 19:17:21.710436  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHUsername
	I0510 19:17:21.710622  441211 sshutil.go:53] new ssh client: &{IP:192.168.72.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/kubernetes-upgrade-517660/id_rsa Username:docker}
	I0510 19:17:21.804656  441211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0510 19:17:21.839129  441211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0510 19:17:21.883474  441211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0510 19:17:21.925685  441211 provision.go:87] duration metric: took 716.22682ms to configureAuth
	I0510 19:17:21.925720  441211 buildroot.go:189] setting minikube options for container-runtime
	I0510 19:17:21.925926  441211 config.go:182] Loaded profile config "kubernetes-upgrade-517660": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 19:17:21.926025  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHHostname
	I0510 19:17:21.928879  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:17:21.929388  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:3b:ac", ip: ""} in network mk-kubernetes-upgrade-517660: {Iface:virbr4 ExpiryTime:2025-05-10 20:16:26 +0000 UTC Type:0 Mac:52:54:00:1b:3b:ac Iaid: IPaddr:192.168.72.244 Prefix:24 Hostname:kubernetes-upgrade-517660 Clientid:01:52:54:00:1b:3b:ac}
	I0510 19:17:21.929419  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined IP address 192.168.72.244 and MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:17:21.929638  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHPort
	I0510 19:17:21.929996  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHKeyPath
	I0510 19:17:21.930210  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHKeyPath
	I0510 19:17:21.930377  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHUsername
	I0510 19:17:21.930621  441211 main.go:141] libmachine: Using SSH client type: native
	I0510 19:17:21.930820  441211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.72.244 22 <nil> <nil>}
	I0510 19:17:21.930836  441211 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0510 19:17:22.557362  440855 main.go:141] libmachine: (auto-380533) Calling .GetIP
	I0510 19:17:22.560309  440855 main.go:141] libmachine: (auto-380533) DBG | domain auto-380533 has defined MAC address 52:54:00:39:2d:dc in network mk-auto-380533
	I0510 19:17:22.560628  440855 main.go:141] libmachine: (auto-380533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:2d:dc", ip: ""} in network mk-auto-380533: {Iface:virbr2 ExpiryTime:2025-05-10 20:17:11 +0000 UTC Type:0 Mac:52:54:00:39:2d:dc Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:auto-380533 Clientid:01:52:54:00:39:2d:dc}
	I0510 19:17:22.560656  440855 main.go:141] libmachine: (auto-380533) DBG | domain auto-380533 has defined IP address 192.168.50.68 and MAC address 52:54:00:39:2d:dc in network mk-auto-380533
	I0510 19:17:22.560953  440855 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0510 19:17:22.565923  440855 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0510 19:17:22.581926  440855 kubeadm.go:875] updating cluster {Name:auto-380533 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:auto-380533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.68 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0510 19:17:22.582053  440855 preload.go:131] Checking if preload exists for k8s version v1.33.0 and runtime crio
	I0510 19:17:22.582120  440855 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 19:17:22.620977  440855 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.33.0". assuming images are not preloaded.
	I0510 19:17:22.621050  440855 ssh_runner.go:195] Run: which lz4
	I0510 19:17:22.626073  440855 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0510 19:17:22.631321  440855 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0510 19:17:22.631371  440855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (413217622 bytes)
	I0510 19:17:24.320632  440855 crio.go:462] duration metric: took 1.694590651s to copy over tarball
	I0510 19:17:24.320717  440855 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0510 19:17:26.339266  440855 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.018496232s)
	I0510 19:17:26.339300  440855 crio.go:469] duration metric: took 2.018627324s to extract the tarball
	I0510 19:17:26.339308  440855 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0510 19:17:26.382461  440855 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 19:17:26.431046  440855 crio.go:514] all images are preloaded for cri-o runtime.
	I0510 19:17:26.431072  440855 cache_images.go:84] Images are preloaded, skipping loading
	I0510 19:17:26.431080  440855 kubeadm.go:926] updating node { 192.168.50.68 8443 v1.33.0 crio true true} ...
	I0510 19:17:26.431176  440855 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.33.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-380533 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.68
	
	[Install]
	 config:
	{KubernetesVersion:v1.33.0 ClusterName:auto-380533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0510 19:17:26.431258  440855 ssh_runner.go:195] Run: crio config
	I0510 19:17:26.480681  440855 cni.go:84] Creating CNI manager for ""
	I0510 19:17:26.480712  440855 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0510 19:17:26.480723  440855 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0510 19:17:26.480744  440855 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.68 APIServerPort:8443 KubernetesVersion:v1.33.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-380533 NodeName:auto-380533 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.68"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.68 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0510 19:17:26.480860  440855 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.68
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-380533"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.68"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.68"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.33.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0510 19:17:26.480934  440855 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.33.0
	I0510 19:17:26.493904  440855 binaries.go:44] Found k8s binaries, skipping transfer
	I0510 19:17:26.493974  440855 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0510 19:17:26.506731  440855 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0510 19:17:26.528053  440855 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0510 19:17:26.549135  440855 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2288 bytes)
	I0510 19:17:26.571011  440855 ssh_runner.go:195] Run: grep 192.168.50.68	control-plane.minikube.internal$ /etc/hosts
	I0510 19:17:26.575440  440855 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.68	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0510 19:17:26.590692  440855 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 19:17:26.733998  440855 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0510 19:17:26.768367  440855 certs.go:68] Setting up /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/auto-380533 for IP: 192.168.50.68
	I0510 19:17:26.768395  440855 certs.go:194] generating shared ca certs ...
	I0510 19:17:26.768413  440855 certs.go:226] acquiring lock for ca certs: {Name:mk8db74782205da4ac57ef815dd495cda255251a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 19:17:26.768597  440855 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20720-388787/.minikube/ca.key
	I0510 19:17:26.768675  440855 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.key
	I0510 19:17:26.768692  440855 certs.go:256] generating profile certs ...
	I0510 19:17:26.768777  440855 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/auto-380533/client.key
	I0510 19:17:26.768795  440855 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/auto-380533/client.crt with IP's: []
	I0510 19:17:27.153204  440855 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/auto-380533/client.crt ...
	I0510 19:17:27.153236  440855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/auto-380533/client.crt: {Name:mk2bb97a501e099ea76b5997791a98f04dc7c32b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 19:17:27.153440  440855 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/auto-380533/client.key ...
	I0510 19:17:27.153464  440855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/auto-380533/client.key: {Name:mk1ff896ebe5988da94f6fbe3df18eb69919c748 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 19:17:27.153580  440855 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/auto-380533/apiserver.key.49359c44
	I0510 19:17:27.153599  440855 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/auto-380533/apiserver.crt.49359c44 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.68]
	I0510 19:17:27.203527  440855 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/auto-380533/apiserver.crt.49359c44 ...
	I0510 19:17:27.203562  440855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/auto-380533/apiserver.crt.49359c44: {Name:mk8271629e1a6fd17b657126dfca27ee03d0264f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 19:17:27.203797  440855 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/auto-380533/apiserver.key.49359c44 ...
	I0510 19:17:27.203822  440855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/auto-380533/apiserver.key.49359c44: {Name:mk57ff51788363c357b1b61fa15151be4bb317b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 19:17:27.203956  440855 certs.go:381] copying /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/auto-380533/apiserver.crt.49359c44 -> /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/auto-380533/apiserver.crt
	I0510 19:17:27.204083  440855 certs.go:385] copying /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/auto-380533/apiserver.key.49359c44 -> /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/auto-380533/apiserver.key
	I0510 19:17:27.204171  440855 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/auto-380533/proxy-client.key
	I0510 19:17:27.204194  440855 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/auto-380533/proxy-client.crt with IP's: []
	I0510 19:17:28.757409  441585 start.go:364] duration metric: took 7.816459274s to acquireMachinesLock for "kindnet-380533"
	I0510 19:17:28.757488  441585 start.go:93] Provisioning new machine with config: &{Name:kindnet-380533 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:kindnet-380533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0510 19:17:28.757624  441585 start.go:125] createHost starting for "" (driver="kvm2")
	I0510 19:17:28.759514  441585 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0510 19:17:28.759722  441585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:17:28.759804  441585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:17:28.777609  441585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35381
	I0510 19:17:28.778206  441585 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:17:28.778840  441585 main.go:141] libmachine: Using API Version  1
	I0510 19:17:28.778866  441585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:17:28.779344  441585 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:17:28.779551  441585 main.go:141] libmachine: (kindnet-380533) Calling .GetMachineName
	I0510 19:17:28.779731  441585 main.go:141] libmachine: (kindnet-380533) Calling .DriverName
	I0510 19:17:28.779890  441585 start.go:159] libmachine.API.Create for "kindnet-380533" (driver="kvm2")
	I0510 19:17:28.779925  441585 client.go:168] LocalClient.Create starting
	I0510 19:17:28.779969  441585 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem
	I0510 19:17:28.780023  441585 main.go:141] libmachine: Decoding PEM data...
	I0510 19:17:28.780046  441585 main.go:141] libmachine: Parsing certificate...
	I0510 19:17:28.780124  441585 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem
	I0510 19:17:28.780159  441585 main.go:141] libmachine: Decoding PEM data...
	I0510 19:17:28.780182  441585 main.go:141] libmachine: Parsing certificate...
	I0510 19:17:28.780218  441585 main.go:141] libmachine: Running pre-create checks...
	I0510 19:17:28.780230  441585 main.go:141] libmachine: (kindnet-380533) Calling .PreCreateCheck
	I0510 19:17:28.780653  441585 main.go:141] libmachine: (kindnet-380533) Calling .GetConfigRaw
	I0510 19:17:28.781175  441585 main.go:141] libmachine: Creating machine...
	I0510 19:17:28.781193  441585 main.go:141] libmachine: (kindnet-380533) Calling .Create
	I0510 19:17:28.781335  441585 main.go:141] libmachine: (kindnet-380533) creating KVM machine...
	I0510 19:17:28.781354  441585 main.go:141] libmachine: (kindnet-380533) creating network...
	I0510 19:17:28.782729  441585 main.go:141] libmachine: (kindnet-380533) DBG | found existing default KVM network
	I0510 19:17:28.784357  441585 main.go:141] libmachine: (kindnet-380533) DBG | I0510 19:17:28.784180  441660 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000013400}
	I0510 19:17:28.784380  441585 main.go:141] libmachine: (kindnet-380533) DBG | created network xml: 
	I0510 19:17:28.784394  441585 main.go:141] libmachine: (kindnet-380533) DBG | <network>
	I0510 19:17:28.784402  441585 main.go:141] libmachine: (kindnet-380533) DBG |   <name>mk-kindnet-380533</name>
	I0510 19:17:28.784412  441585 main.go:141] libmachine: (kindnet-380533) DBG |   <dns enable='no'/>
	I0510 19:17:28.784420  441585 main.go:141] libmachine: (kindnet-380533) DBG |   
	I0510 19:17:28.784449  441585 main.go:141] libmachine: (kindnet-380533) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0510 19:17:28.784460  441585 main.go:141] libmachine: (kindnet-380533) DBG |     <dhcp>
	I0510 19:17:28.784469  441585 main.go:141] libmachine: (kindnet-380533) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0510 19:17:28.784480  441585 main.go:141] libmachine: (kindnet-380533) DBG |     </dhcp>
	I0510 19:17:28.784489  441585 main.go:141] libmachine: (kindnet-380533) DBG |   </ip>
	I0510 19:17:28.784500  441585 main.go:141] libmachine: (kindnet-380533) DBG |   
	I0510 19:17:28.784508  441585 main.go:141] libmachine: (kindnet-380533) DBG | </network>
	I0510 19:17:28.784518  441585 main.go:141] libmachine: (kindnet-380533) DBG | 
	I0510 19:17:28.790618  441585 main.go:141] libmachine: (kindnet-380533) DBG | trying to create private KVM network mk-kindnet-380533 192.168.39.0/24...
	I0510 19:17:28.882503  441585 main.go:141] libmachine: (kindnet-380533) DBG | private KVM network mk-kindnet-380533 192.168.39.0/24 created
	I0510 19:17:28.882556  441585 main.go:141] libmachine: (kindnet-380533) DBG | I0510 19:17:28.882469  441660 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20720-388787/.minikube
	I0510 19:17:28.882570  441585 main.go:141] libmachine: (kindnet-380533) setting up store path in /home/jenkins/minikube-integration/20720-388787/.minikube/machines/kindnet-380533 ...
	I0510 19:17:28.882616  441585 main.go:141] libmachine: (kindnet-380533) building disk image from file:///home/jenkins/minikube-integration/20720-388787/.minikube/cache/iso/amd64/minikube-v1.35.0-1746739450-20720-amd64.iso
	I0510 19:17:28.882641  441585 main.go:141] libmachine: (kindnet-380533) Downloading /home/jenkins/minikube-integration/20720-388787/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20720-388787/.minikube/cache/iso/amd64/minikube-v1.35.0-1746739450-20720-amd64.iso...
	I0510 19:17:29.198339  441585 main.go:141] libmachine: (kindnet-380533) DBG | I0510 19:17:29.198182  441660 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/kindnet-380533/id_rsa...
	I0510 19:17:29.750433  441585 main.go:141] libmachine: (kindnet-380533) DBG | I0510 19:17:29.750259  441660 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/kindnet-380533/kindnet-380533.rawdisk...
	I0510 19:17:29.750474  441585 main.go:141] libmachine: (kindnet-380533) DBG | Writing magic tar header
	I0510 19:17:29.750499  441585 main.go:141] libmachine: (kindnet-380533) DBG | Writing SSH key tar header
	I0510 19:17:29.750512  441585 main.go:141] libmachine: (kindnet-380533) DBG | I0510 19:17:29.750376  441660 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20720-388787/.minikube/machines/kindnet-380533 ...
	I0510 19:17:29.750528  441585 main.go:141] libmachine: (kindnet-380533) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/kindnet-380533
	I0510 19:17:29.750538  441585 main.go:141] libmachine: (kindnet-380533) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20720-388787/.minikube/machines
	I0510 19:17:29.750557  441585 main.go:141] libmachine: (kindnet-380533) setting executable bit set on /home/jenkins/minikube-integration/20720-388787/.minikube/machines/kindnet-380533 (perms=drwx------)
	I0510 19:17:29.750574  441585 main.go:141] libmachine: (kindnet-380533) setting executable bit set on /home/jenkins/minikube-integration/20720-388787/.minikube/machines (perms=drwxr-xr-x)
	I0510 19:17:29.750588  441585 main.go:141] libmachine: (kindnet-380533) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20720-388787/.minikube
	I0510 19:17:29.750609  441585 main.go:141] libmachine: (kindnet-380533) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20720-388787
	I0510 19:17:29.750623  441585 main.go:141] libmachine: (kindnet-380533) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0510 19:17:29.750634  441585 main.go:141] libmachine: (kindnet-380533) DBG | checking permissions on dir: /home/jenkins
	I0510 19:17:29.750641  441585 main.go:141] libmachine: (kindnet-380533) DBG | checking permissions on dir: /home
	I0510 19:17:29.750652  441585 main.go:141] libmachine: (kindnet-380533) setting executable bit set on /home/jenkins/minikube-integration/20720-388787/.minikube (perms=drwxr-xr-x)
	I0510 19:17:29.750663  441585 main.go:141] libmachine: (kindnet-380533) setting executable bit set on /home/jenkins/minikube-integration/20720-388787 (perms=drwxrwxr-x)
	I0510 19:17:29.750676  441585 main.go:141] libmachine: (kindnet-380533) DBG | skipping /home - not owner
	I0510 19:17:29.750692  441585 main.go:141] libmachine: (kindnet-380533) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0510 19:17:29.750703  441585 main.go:141] libmachine: (kindnet-380533) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0510 19:17:29.750715  441585 main.go:141] libmachine: (kindnet-380533) creating domain...
	I0510 19:17:29.752143  441585 main.go:141] libmachine: (kindnet-380533) define libvirt domain using xml: 
	I0510 19:17:29.752172  441585 main.go:141] libmachine: (kindnet-380533) <domain type='kvm'>
	I0510 19:17:29.752182  441585 main.go:141] libmachine: (kindnet-380533)   <name>kindnet-380533</name>
	I0510 19:17:29.752194  441585 main.go:141] libmachine: (kindnet-380533)   <memory unit='MiB'>3072</memory>
	I0510 19:17:29.752203  441585 main.go:141] libmachine: (kindnet-380533)   <vcpu>2</vcpu>
	I0510 19:17:29.752216  441585 main.go:141] libmachine: (kindnet-380533)   <features>
	I0510 19:17:29.752226  441585 main.go:141] libmachine: (kindnet-380533)     <acpi/>
	I0510 19:17:29.752232  441585 main.go:141] libmachine: (kindnet-380533)     <apic/>
	I0510 19:17:29.752256  441585 main.go:141] libmachine: (kindnet-380533)     <pae/>
	I0510 19:17:29.752265  441585 main.go:141] libmachine: (kindnet-380533)     
	I0510 19:17:29.752273  441585 main.go:141] libmachine: (kindnet-380533)   </features>
	I0510 19:17:29.752282  441585 main.go:141] libmachine: (kindnet-380533)   <cpu mode='host-passthrough'>
	I0510 19:17:29.752290  441585 main.go:141] libmachine: (kindnet-380533)   
	I0510 19:17:29.752299  441585 main.go:141] libmachine: (kindnet-380533)   </cpu>
	I0510 19:17:29.752337  441585 main.go:141] libmachine: (kindnet-380533)   <os>
	I0510 19:17:29.752376  441585 main.go:141] libmachine: (kindnet-380533)     <type>hvm</type>
	I0510 19:17:29.752392  441585 main.go:141] libmachine: (kindnet-380533)     <boot dev='cdrom'/>
	I0510 19:17:29.752403  441585 main.go:141] libmachine: (kindnet-380533)     <boot dev='hd'/>
	I0510 19:17:29.752414  441585 main.go:141] libmachine: (kindnet-380533)     <bootmenu enable='no'/>
	I0510 19:17:29.752423  441585 main.go:141] libmachine: (kindnet-380533)   </os>
	I0510 19:17:29.752432  441585 main.go:141] libmachine: (kindnet-380533)   <devices>
	I0510 19:17:29.752449  441585 main.go:141] libmachine: (kindnet-380533)     <disk type='file' device='cdrom'>
	I0510 19:17:29.752466  441585 main.go:141] libmachine: (kindnet-380533)       <source file='/home/jenkins/minikube-integration/20720-388787/.minikube/machines/kindnet-380533/boot2docker.iso'/>
	I0510 19:17:29.752477  441585 main.go:141] libmachine: (kindnet-380533)       <target dev='hdc' bus='scsi'/>
	I0510 19:17:29.752488  441585 main.go:141] libmachine: (kindnet-380533)       <readonly/>
	I0510 19:17:29.752515  441585 main.go:141] libmachine: (kindnet-380533)     </disk>
	I0510 19:17:29.752549  441585 main.go:141] libmachine: (kindnet-380533)     <disk type='file' device='disk'>
	I0510 19:17:29.752572  441585 main.go:141] libmachine: (kindnet-380533)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0510 19:17:29.752592  441585 main.go:141] libmachine: (kindnet-380533)       <source file='/home/jenkins/minikube-integration/20720-388787/.minikube/machines/kindnet-380533/kindnet-380533.rawdisk'/>
	I0510 19:17:29.752604  441585 main.go:141] libmachine: (kindnet-380533)       <target dev='hda' bus='virtio'/>
	I0510 19:17:29.752615  441585 main.go:141] libmachine: (kindnet-380533)     </disk>
	I0510 19:17:29.752627  441585 main.go:141] libmachine: (kindnet-380533)     <interface type='network'>
	I0510 19:17:29.752644  441585 main.go:141] libmachine: (kindnet-380533)       <source network='mk-kindnet-380533'/>
	I0510 19:17:29.752663  441585 main.go:141] libmachine: (kindnet-380533)       <model type='virtio'/>
	I0510 19:17:29.752674  441585 main.go:141] libmachine: (kindnet-380533)     </interface>
	I0510 19:17:29.752693  441585 main.go:141] libmachine: (kindnet-380533)     <interface type='network'>
	I0510 19:17:29.752703  441585 main.go:141] libmachine: (kindnet-380533)       <source network='default'/>
	I0510 19:17:29.752727  441585 main.go:141] libmachine: (kindnet-380533)       <model type='virtio'/>
	I0510 19:17:29.752742  441585 main.go:141] libmachine: (kindnet-380533)     </interface>
	I0510 19:17:29.752755  441585 main.go:141] libmachine: (kindnet-380533)     <serial type='pty'>
	I0510 19:17:29.752766  441585 main.go:141] libmachine: (kindnet-380533)       <target port='0'/>
	I0510 19:17:29.752778  441585 main.go:141] libmachine: (kindnet-380533)     </serial>
	I0510 19:17:29.752789  441585 main.go:141] libmachine: (kindnet-380533)     <console type='pty'>
	I0510 19:17:29.752799  441585 main.go:141] libmachine: (kindnet-380533)       <target type='serial' port='0'/>
	I0510 19:17:29.752813  441585 main.go:141] libmachine: (kindnet-380533)     </console>
	I0510 19:17:29.752823  441585 main.go:141] libmachine: (kindnet-380533)     <rng model='virtio'>
	I0510 19:17:29.752835  441585 main.go:141] libmachine: (kindnet-380533)       <backend model='random'>/dev/random</backend>
	I0510 19:17:29.752844  441585 main.go:141] libmachine: (kindnet-380533)     </rng>
	I0510 19:17:29.752853  441585 main.go:141] libmachine: (kindnet-380533)     
	I0510 19:17:29.752862  441585 main.go:141] libmachine: (kindnet-380533)     
	I0510 19:17:29.752872  441585 main.go:141] libmachine: (kindnet-380533)   </devices>
	I0510 19:17:29.752892  441585 main.go:141] libmachine: (kindnet-380533) </domain>
	I0510 19:17:29.752906  441585 main.go:141] libmachine: (kindnet-380533) 
	I0510 19:17:29.757862  441585 main.go:141] libmachine: (kindnet-380533) DBG | domain kindnet-380533 has defined MAC address 52:54:00:da:8a:18 in network default
	I0510 19:17:29.758575  441585 main.go:141] libmachine: (kindnet-380533) starting domain...
	I0510 19:17:29.758601  441585 main.go:141] libmachine: (kindnet-380533) DBG | domain kindnet-380533 has defined MAC address 52:54:00:6d:01:25 in network mk-kindnet-380533
	I0510 19:17:29.758609  441585 main.go:141] libmachine: (kindnet-380533) ensuring networks are active...
	I0510 19:17:29.759394  441585 main.go:141] libmachine: (kindnet-380533) Ensuring network default is active
	I0510 19:17:29.759865  441585 main.go:141] libmachine: (kindnet-380533) Ensuring network mk-kindnet-380533 is active
	I0510 19:17:29.760520  441585 main.go:141] libmachine: (kindnet-380533) getting domain XML...
	I0510 19:17:29.761435  441585 main.go:141] libmachine: (kindnet-380533) creating domain...
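The lines above record the full libvirt flow for the kindnet-380533 machine: render a <domain type='kvm'> XML document, ensure the two networks referenced by its <interface> elements are active, define the domain, and start it. A minimal Go sketch of that sequence, assuming the libvirt.org/go/libvirt bindings (names below are illustrative, not the kvm2 driver's own code):

	package kvmsketch
	
	import (
		libvirt "libvirt.org/go/libvirt"
	)
	
	// defineAndStart mirrors the "define libvirt domain using xml" /
	// "ensuring networks are active" / "creating domain" steps in the log.
	func defineAndStart(domainXML string, networks []string) error {
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			return err
		}
		defer conn.Close()
	
		// Make sure every network the domain's interfaces reference is up.
		for _, name := range networks {
			net, err := conn.LookupNetworkByName(name)
			if err != nil {
				return err
			}
			active, err := net.IsActive()
			if err != nil {
				return err
			}
			if !active {
				if err := net.Create(); err != nil {
					return err
				}
			}
		}
	
		// Define the domain from the XML shown in the log, then start it.
		dom, err := conn.DomainDefineXML(domainXML)
		if err != nil {
			return err
		}
		defer dom.Free()
		return dom.Create()
	}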
	I0510 19:17:28.496932  441211 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0510 19:17:28.496959  441211 machine.go:96] duration metric: took 7.669573393s to provisionDockerMachine
	I0510 19:17:28.496973  441211 start.go:293] postStartSetup for "kubernetes-upgrade-517660" (driver="kvm2")
	I0510 19:17:28.496988  441211 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0510 19:17:28.497034  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .DriverName
	I0510 19:17:28.497427  441211 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0510 19:17:28.497466  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHHostname
	I0510 19:17:28.500606  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:17:28.501009  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:3b:ac", ip: ""} in network mk-kubernetes-upgrade-517660: {Iface:virbr4 ExpiryTime:2025-05-10 20:16:26 +0000 UTC Type:0 Mac:52:54:00:1b:3b:ac Iaid: IPaddr:192.168.72.244 Prefix:24 Hostname:kubernetes-upgrade-517660 Clientid:01:52:54:00:1b:3b:ac}
	I0510 19:17:28.501038  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined IP address 192.168.72.244 and MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:17:28.501189  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHPort
	I0510 19:17:28.501388  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHKeyPath
	I0510 19:17:28.501587  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHUsername
	I0510 19:17:28.501787  441211 sshutil.go:53] new ssh client: &{IP:192.168.72.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/kubernetes-upgrade-517660/id_rsa Username:docker}
	I0510 19:17:28.590797  441211 ssh_runner.go:195] Run: cat /etc/os-release
	I0510 19:17:28.596347  441211 info.go:137] Remote host: Buildroot 2024.11.2
	I0510 19:17:28.596375  441211 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-388787/.minikube/addons for local assets ...
	I0510 19:17:28.596448  441211 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-388787/.minikube/files for local assets ...
	I0510 19:17:28.596538  441211 filesync.go:149] local asset: /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem -> 3959802.pem in /etc/ssl/certs
	I0510 19:17:28.596667  441211 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0510 19:17:28.609737  441211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem --> /etc/ssl/certs/3959802.pem (1708 bytes)
	I0510 19:17:28.641080  441211 start.go:296] duration metric: took 144.085943ms for postStartSetup
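The filesync lines above scan .minikube/addons and .minikube/files and mirror anything found under files/ onto the guest at the same path rooted at / (here files/etc/ssl/certs/3959802.pem becomes /etc/ssl/certs/3959802.pem). A rough sketch of that mapping as a plain directory walk (illustrative only, not minikube's filesync.go):

	package filesyncsketch
	
	import (
		"io/fs"
		"path/filepath"
	)
	
	// localAssets walks a files directory and returns, for every regular file,
	// the guest path it should be copied to: the path relative to root,
	// re-rooted at "/", matching the "local asset ... -> ..." log line.
	func localAssets(root string) (map[string]string, error) {
		assets := map[string]string{}
		err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
			if err != nil {
				return err
			}
			if d.IsDir() {
				return nil
			}
			rel, err := filepath.Rel(root, path)
			if err != nil {
				return err
			}
			assets[path] = "/" + filepath.ToSlash(rel)
			return nil
		})
		return assets, err
	}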
	I0510 19:17:28.641128  441211 fix.go:56] duration metric: took 7.843542602s for fixHost
	I0510 19:17:28.641163  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHHostname
	I0510 19:17:28.644742  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:17:28.645476  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:3b:ac", ip: ""} in network mk-kubernetes-upgrade-517660: {Iface:virbr4 ExpiryTime:2025-05-10 20:16:26 +0000 UTC Type:0 Mac:52:54:00:1b:3b:ac Iaid: IPaddr:192.168.72.244 Prefix:24 Hostname:kubernetes-upgrade-517660 Clientid:01:52:54:00:1b:3b:ac}
	I0510 19:17:28.645535  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined IP address 192.168.72.244 and MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:17:28.645885  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHPort
	I0510 19:17:28.646144  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHKeyPath
	I0510 19:17:28.646381  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHKeyPath
	I0510 19:17:28.646564  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHUsername
	I0510 19:17:28.646782  441211 main.go:141] libmachine: Using SSH client type: native
	I0510 19:17:28.647083  441211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.72.244 22 <nil> <nil>}
	I0510 19:17:28.647105  441211 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0510 19:17:28.757197  441211 main.go:141] libmachine: SSH cmd err, output: <nil>: 1746904648.753559656
	
	I0510 19:17:28.757230  441211 fix.go:216] guest clock: 1746904648.753559656
	I0510 19:17:28.757242  441211 fix.go:229] Guest: 2025-05-10 19:17:28.753559656 +0000 UTC Remote: 2025-05-10 19:17:28.641132184 +0000 UTC m=+32.004744075 (delta=112.427472ms)
	I0510 19:17:28.757296  441211 fix.go:200] guest clock delta is within tolerance: 112.427472ms
	I0510 19:17:28.757303  441211 start.go:83] releasing machines lock for "kubernetes-upgrade-517660", held for 7.959747231s
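The fix.go lines above read the guest clock with date +%s.%N, compare it to the host clock, and accept the machine when the delta is inside a tolerance (112ms here). A small sketch of that comparison, assuming the guest's command output has already been captured as a string (helper names are illustrative):

	package clocksketch
	
	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)
	
	// guestClockDelta parses the output of `date +%s.%N` from the guest and
	// returns how far the guest clock is from the given host reference time.
	func guestClockDelta(guestOutput string, hostNow time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
		if err != nil {
			return 0, fmt.Errorf("parsing guest clock %q: %w", guestOutput, err)
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return guest.Sub(hostNow), nil
	}
	
	// withinTolerance reports whether the skew is acceptable, mirroring the
	// "guest clock delta is within tolerance" log line above.
	func withinTolerance(delta, tolerance time.Duration) bool {
		if delta < 0 {
			delta = -delta
		}
		return delta <= tolerance
	}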
	I0510 19:17:28.757330  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .DriverName
	I0510 19:17:28.757628  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetIP
	I0510 19:17:28.761062  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:17:28.761446  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:3b:ac", ip: ""} in network mk-kubernetes-upgrade-517660: {Iface:virbr4 ExpiryTime:2025-05-10 20:16:26 +0000 UTC Type:0 Mac:52:54:00:1b:3b:ac Iaid: IPaddr:192.168.72.244 Prefix:24 Hostname:kubernetes-upgrade-517660 Clientid:01:52:54:00:1b:3b:ac}
	I0510 19:17:28.761476  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined IP address 192.168.72.244 and MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:17:28.761615  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .DriverName
	I0510 19:17:28.762194  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .DriverName
	I0510 19:17:28.762373  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .DriverName
	I0510 19:17:28.762491  441211 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0510 19:17:28.762550  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHHostname
	I0510 19:17:28.762587  441211 ssh_runner.go:195] Run: cat /version.json
	I0510 19:17:28.762616  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHHostname
	I0510 19:17:28.765508  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:17:28.765734  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:17:28.765961  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:3b:ac", ip: ""} in network mk-kubernetes-upgrade-517660: {Iface:virbr4 ExpiryTime:2025-05-10 20:16:26 +0000 UTC Type:0 Mac:52:54:00:1b:3b:ac Iaid: IPaddr:192.168.72.244 Prefix:24 Hostname:kubernetes-upgrade-517660 Clientid:01:52:54:00:1b:3b:ac}
	I0510 19:17:28.766000  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined IP address 192.168.72.244 and MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:17:28.766109  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHPort
	I0510 19:17:28.766221  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:3b:ac", ip: ""} in network mk-kubernetes-upgrade-517660: {Iface:virbr4 ExpiryTime:2025-05-10 20:16:26 +0000 UTC Type:0 Mac:52:54:00:1b:3b:ac Iaid: IPaddr:192.168.72.244 Prefix:24 Hostname:kubernetes-upgrade-517660 Clientid:01:52:54:00:1b:3b:ac}
	I0510 19:17:28.766244  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined IP address 192.168.72.244 and MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:17:28.766279  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHKeyPath
	I0510 19:17:28.766373  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHPort
	I0510 19:17:28.766499  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHUsername
	I0510 19:17:28.766526  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHKeyPath
	I0510 19:17:28.766625  441211 sshutil.go:53] new ssh client: &{IP:192.168.72.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/kubernetes-upgrade-517660/id_rsa Username:docker}
	I0510 19:17:28.766705  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetSSHUsername
	I0510 19:17:28.766822  441211 sshutil.go:53] new ssh client: &{IP:192.168.72.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/kubernetes-upgrade-517660/id_rsa Username:docker}
	I0510 19:17:28.876707  441211 ssh_runner.go:195] Run: systemctl --version
	I0510 19:17:28.884290  441211 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0510 19:17:29.068937  441211 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0510 19:17:29.078564  441211 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0510 19:17:29.078644  441211 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0510 19:17:29.096701  441211 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0510 19:17:29.096743  441211 start.go:495] detecting cgroup driver to use...
	I0510 19:17:29.096822  441211 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0510 19:17:29.129022  441211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0510 19:17:29.153843  441211 docker.go:225] disabling cri-docker service (if available) ...
	I0510 19:17:29.153925  441211 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0510 19:17:29.179244  441211 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0510 19:17:29.198123  441211 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0510 19:17:29.417673  441211 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0510 19:17:29.607287  441211 docker.go:241] disabling docker service ...
	I0510 19:17:29.607372  441211 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0510 19:17:29.646512  441211 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0510 19:17:29.664071  441211 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0510 19:17:29.882924  441211 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0510 19:17:30.106300  441211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0510 19:17:30.136177  441211 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0510 19:17:30.188893  441211 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0510 19:17:30.188965  441211 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:17:30.210934  441211 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0510 19:17:30.211013  441211 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:17:30.238866  441211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:17:30.268669  441211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:17:30.290216  441211 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0510 19:17:30.315065  441211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:17:30.338364  441211 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:17:30.356556  441211 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:17:30.389801  441211 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0510 19:17:30.432516  441211 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0510 19:17:30.456238  441211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 19:17:30.837311  441211 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0510 19:17:31.704355  441211 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0510 19:17:31.704442  441211 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0510 19:17:31.711376  441211 start.go:563] Will wait 60s for crictl version
	I0510 19:17:31.711455  441211 ssh_runner.go:195] Run: which crictl
	I0510 19:17:31.716735  441211 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0510 19:17:31.758817  441211 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0510 19:17:31.758931  441211 ssh_runner.go:195] Run: crio --version
	I0510 19:17:31.793740  441211 ssh_runner.go:195] Run: crio --version
	I0510 19:17:31.832143  441211 out.go:177] * Preparing Kubernetes v1.33.0 on CRI-O 1.29.1 ...
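The block above rewires CRI-O before Kubernetes is started: crictl.yaml is pointed at crio.sock, then sed edits /etc/crio/crio.conf.d/02-crio.conf to set the pause image and the cgroupfs cgroup manager, and the runtime is restarted. A sketch of the same config edit done locally with Go regexps instead of sed over SSH (path and patterns mirror the log; this is illustrative, not minikube's crio.go):

	package criosketch
	
	import (
		"os"
		"regexp"
	)
	
	// setCrioConfig rewrites the cgroup_manager and pause_image lines in a
	// CRI-O drop-in config, the same edits the sed commands in the log perform.
	func setCrioConfig(path, cgroupManager, pauseImage string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		out := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(data, []byte(`cgroup_manager = "`+cgroupManager+`"`))
		out = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(out, []byte(`pause_image = "`+pauseImage+`"`))
		return os.WriteFile(path, out, 0o644)
	}

As in the log, the edit only takes effect after the daemon-reload and systemctl restart crio steps.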
	I0510 19:17:27.637484  440855 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/auto-380533/proxy-client.crt ...
	I0510 19:17:27.637525  440855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/auto-380533/proxy-client.crt: {Name:mkf826f19db830ed1ead49d8e97e3c72543a8b9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 19:17:27.637734  440855 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/auto-380533/proxy-client.key ...
	I0510 19:17:27.637752  440855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/auto-380533/proxy-client.key: {Name:mk7f259d6244410f75414168dc45f3765a2a6614 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 19:17:27.637963  440855 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/395980.pem (1338 bytes)
	W0510 19:17:27.638017  440855 certs.go:480] ignoring /home/jenkins/minikube-integration/20720-388787/.minikube/certs/395980_empty.pem, impossibly tiny 0 bytes
	I0510 19:17:27.638034  440855 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem (1679 bytes)
	I0510 19:17:27.638071  440855 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem (1078 bytes)
	I0510 19:17:27.638105  440855 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem (1123 bytes)
	I0510 19:17:27.638141  440855 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem (1675 bytes)
	I0510 19:17:27.638195  440855 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem (1708 bytes)
	I0510 19:17:27.638831  440855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0510 19:17:27.677920  440855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0510 19:17:27.715112  440855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0510 19:17:27.751680  440855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0510 19:17:27.783278  440855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/auto-380533/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I0510 19:17:27.813718  440855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/auto-380533/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0510 19:17:27.851486  440855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/auto-380533/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0510 19:17:27.889113  440855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/auto-380533/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0510 19:17:27.921830  440855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0510 19:17:27.952229  440855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/certs/395980.pem --> /usr/share/ca-certificates/395980.pem (1338 bytes)
	I0510 19:17:27.983855  440855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem --> /usr/share/ca-certificates/3959802.pem (1708 bytes)
	I0510 19:17:28.018562  440855 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0510 19:17:28.040494  440855 ssh_runner.go:195] Run: openssl version
	I0510 19:17:28.047846  440855 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0510 19:17:28.062139  440855 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0510 19:17:28.068231  440855 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 10 17:52 /usr/share/ca-certificates/minikubeCA.pem
	I0510 19:17:28.068301  440855 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0510 19:17:28.076106  440855 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0510 19:17:28.090152  440855 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/395980.pem && ln -fs /usr/share/ca-certificates/395980.pem /etc/ssl/certs/395980.pem"
	I0510 19:17:28.106510  440855 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/395980.pem
	I0510 19:17:28.112085  440855 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 10 18:00 /usr/share/ca-certificates/395980.pem
	I0510 19:17:28.112149  440855 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/395980.pem
	I0510 19:17:28.119571  440855 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/395980.pem /etc/ssl/certs/51391683.0"
	I0510 19:17:28.133119  440855 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3959802.pem && ln -fs /usr/share/ca-certificates/3959802.pem /etc/ssl/certs/3959802.pem"
	I0510 19:17:28.151406  440855 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3959802.pem
	I0510 19:17:28.157893  440855 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 10 18:00 /usr/share/ca-certificates/3959802.pem
	I0510 19:17:28.157972  440855 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3959802.pem
	I0510 19:17:28.166233  440855 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3959802.pem /etc/ssl/certs/3ec20f2e.0"
	I0510 19:17:28.181724  440855 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0510 19:17:28.187563  440855 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0510 19:17:28.187647  440855 kubeadm.go:392] StartCluster: {Name:auto-380533 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:auto-380533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.68 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 19:17:28.187763  440855 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0510 19:17:28.187836  440855 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0510 19:17:28.246382  440855 cri.go:89] found id: ""
	I0510 19:17:28.246471  440855 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0510 19:17:28.259710  440855 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0510 19:17:28.271992  440855 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0510 19:17:28.287937  440855 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0510 19:17:28.287967  440855 kubeadm.go:157] found existing configuration files:
	
	I0510 19:17:28.288025  440855 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0510 19:17:28.300721  440855 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0510 19:17:28.300791  440855 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0510 19:17:28.313736  440855 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0510 19:17:28.325582  440855 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0510 19:17:28.325663  440855 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0510 19:17:28.340200  440855 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0510 19:17:28.352896  440855 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0510 19:17:28.353036  440855 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0510 19:17:28.365853  440855 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0510 19:17:28.378832  440855 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0510 19:17:28.378930  440855 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0510 19:17:28.392646  440855 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0510 19:17:28.568969  440855 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0510 19:17:31.156030  441585 main.go:141] libmachine: (kindnet-380533) waiting for IP...
	I0510 19:17:31.156996  441585 main.go:141] libmachine: (kindnet-380533) DBG | domain kindnet-380533 has defined MAC address 52:54:00:6d:01:25 in network mk-kindnet-380533
	I0510 19:17:31.157531  441585 main.go:141] libmachine: (kindnet-380533) DBG | unable to find current IP address of domain kindnet-380533 in network mk-kindnet-380533
	I0510 19:17:31.157610  441585 main.go:141] libmachine: (kindnet-380533) DBG | I0510 19:17:31.157564  441660 retry.go:31] will retry after 251.65008ms: waiting for domain to come up
	I0510 19:17:31.411459  441585 main.go:141] libmachine: (kindnet-380533) DBG | domain kindnet-380533 has defined MAC address 52:54:00:6d:01:25 in network mk-kindnet-380533
	I0510 19:17:31.412125  441585 main.go:141] libmachine: (kindnet-380533) DBG | unable to find current IP address of domain kindnet-380533 in network mk-kindnet-380533
	I0510 19:17:31.412153  441585 main.go:141] libmachine: (kindnet-380533) DBG | I0510 19:17:31.412060  441660 retry.go:31] will retry after 296.198311ms: waiting for domain to come up
	I0510 19:17:31.709763  441585 main.go:141] libmachine: (kindnet-380533) DBG | domain kindnet-380533 has defined MAC address 52:54:00:6d:01:25 in network mk-kindnet-380533
	I0510 19:17:31.710278  441585 main.go:141] libmachine: (kindnet-380533) DBG | unable to find current IP address of domain kindnet-380533 in network mk-kindnet-380533
	I0510 19:17:31.710311  441585 main.go:141] libmachine: (kindnet-380533) DBG | I0510 19:17:31.710257  441660 retry.go:31] will retry after 317.224923ms: waiting for domain to come up
	I0510 19:17:32.029040  441585 main.go:141] libmachine: (kindnet-380533) DBG | domain kindnet-380533 has defined MAC address 52:54:00:6d:01:25 in network mk-kindnet-380533
	I0510 19:17:32.029663  441585 main.go:141] libmachine: (kindnet-380533) DBG | unable to find current IP address of domain kindnet-380533 in network mk-kindnet-380533
	I0510 19:17:32.029707  441585 main.go:141] libmachine: (kindnet-380533) DBG | I0510 19:17:32.029623  441660 retry.go:31] will retry after 604.876098ms: waiting for domain to come up
	I0510 19:17:32.636190  441585 main.go:141] libmachine: (kindnet-380533) DBG | domain kindnet-380533 has defined MAC address 52:54:00:6d:01:25 in network mk-kindnet-380533
	I0510 19:17:32.636965  441585 main.go:141] libmachine: (kindnet-380533) DBG | unable to find current IP address of domain kindnet-380533 in network mk-kindnet-380533
	I0510 19:17:32.637007  441585 main.go:141] libmachine: (kindnet-380533) DBG | I0510 19:17:32.636946  441660 retry.go:31] will retry after 723.044382ms: waiting for domain to come up
	I0510 19:17:33.361531  441585 main.go:141] libmachine: (kindnet-380533) DBG | domain kindnet-380533 has defined MAC address 52:54:00:6d:01:25 in network mk-kindnet-380533
	I0510 19:17:33.362244  441585 main.go:141] libmachine: (kindnet-380533) DBG | unable to find current IP address of domain kindnet-380533 in network mk-kindnet-380533
	I0510 19:17:33.362274  441585 main.go:141] libmachine: (kindnet-380533) DBG | I0510 19:17:33.362195  441660 retry.go:31] will retry after 759.450565ms: waiting for domain to come up
	I0510 19:17:34.123820  441585 main.go:141] libmachine: (kindnet-380533) DBG | domain kindnet-380533 has defined MAC address 52:54:00:6d:01:25 in network mk-kindnet-380533
	I0510 19:17:34.124460  441585 main.go:141] libmachine: (kindnet-380533) DBG | unable to find current IP address of domain kindnet-380533 in network mk-kindnet-380533
	I0510 19:17:34.124486  441585 main.go:141] libmachine: (kindnet-380533) DBG | I0510 19:17:34.124359  441660 retry.go:31] will retry after 767.08013ms: waiting for domain to come up
	I0510 19:17:34.893614  441585 main.go:141] libmachine: (kindnet-380533) DBG | domain kindnet-380533 has defined MAC address 52:54:00:6d:01:25 in network mk-kindnet-380533
	I0510 19:17:34.894240  441585 main.go:141] libmachine: (kindnet-380533) DBG | unable to find current IP address of domain kindnet-380533 in network mk-kindnet-380533
	I0510 19:17:34.894414  441585 main.go:141] libmachine: (kindnet-380533) DBG | I0510 19:17:34.894356  441660 retry.go:31] will retry after 1.248097854s: waiting for domain to come up
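The retry.go lines above poll the new domain for an IP address, sleeping a growing, jittered interval between attempts (251ms, 296ms, 317ms, 604ms, ...). A generic sketch of that wait-with-backoff pattern (function and constants are illustrative, not minikube's retry package):

	package retrysketch
	
	import (
		"errors"
		"math/rand"
		"time"
	)
	
	// retryUntil keeps calling check until it succeeds or the deadline passes,
	// sleeping a jittered, growing delay between attempts - the same shape as
	// the "will retry after ..." lines in the log.
	func retryUntil(timeout time.Duration, check func() error) error {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for {
			err := check()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return errors.New("timed out waiting for condition: " + err.Error())
			}
			// Jitter the wait and grow it on each attempt.
			time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
			delay = delay * 3 / 2
		}
	}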
	I0510 19:17:31.833592  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) Calling .GetIP
	I0510 19:17:31.838632  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:17:31.839003  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:3b:ac", ip: ""} in network mk-kubernetes-upgrade-517660: {Iface:virbr4 ExpiryTime:2025-05-10 20:16:26 +0000 UTC Type:0 Mac:52:54:00:1b:3b:ac Iaid: IPaddr:192.168.72.244 Prefix:24 Hostname:kubernetes-upgrade-517660 Clientid:01:52:54:00:1b:3b:ac}
	I0510 19:17:31.839032  441211 main.go:141] libmachine: (kubernetes-upgrade-517660) DBG | domain kubernetes-upgrade-517660 has defined IP address 192.168.72.244 and MAC address 52:54:00:1b:3b:ac in network mk-kubernetes-upgrade-517660
	I0510 19:17:31.839464  441211 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0510 19:17:31.844796  441211 kubeadm.go:875] updating cluster {Name:kubernetes-upgrade-517660 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:kubernetes-upgrade-517660 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.244 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0510 19:17:31.844916  441211 preload.go:131] Checking if preload exists for k8s version v1.33.0 and runtime crio
	I0510 19:17:31.844981  441211 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 19:17:31.900281  441211 crio.go:514] all images are preloaded for cri-o runtime.
	I0510 19:17:31.900309  441211 crio.go:433] Images already preloaded, skipping extraction
	I0510 19:17:31.900375  441211 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 19:17:31.947091  441211 crio.go:514] all images are preloaded for cri-o runtime.
	I0510 19:17:31.947122  441211 cache_images.go:84] Images are preloaded, skipping loading
	I0510 19:17:31.947133  441211 kubeadm.go:926] updating node { 192.168.72.244 8443 v1.33.0 crio true true} ...
	I0510 19:17:31.947298  441211 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.33.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-517660 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.244
	
	[Install]
	 config:
	{KubernetesVersion:v1.33.0 ClusterName:kubernetes-upgrade-517660 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0510 19:17:31.947401  441211 ssh_runner.go:195] Run: crio config
	I0510 19:17:32.013205  441211 cni.go:84] Creating CNI manager for ""
	I0510 19:17:32.013243  441211 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0510 19:17:32.013254  441211 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0510 19:17:32.013282  441211 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.244 APIServerPort:8443 KubernetesVersion:v1.33.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-517660 NodeName:kubernetes-upgrade-517660 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.244"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.244 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0510 19:17:32.013447  441211 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.244
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-517660"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.244"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.244"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.33.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0510 19:17:32.013526  441211 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.33.0
	I0510 19:17:32.030265  441211 binaries.go:44] Found k8s binaries, skipping transfer
	I0510 19:17:32.030344  441211 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0510 19:17:32.046323  441211 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0510 19:17:32.074834  441211 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0510 19:17:32.105719  441211 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I0510 19:17:32.133402  441211 ssh_runner.go:195] Run: grep 192.168.72.244	control-plane.minikube.internal$ /etc/hosts
	I0510 19:17:32.139411  441211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 19:17:32.399589  441211 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0510 19:17:32.495978  441211 certs.go:68] Setting up /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kubernetes-upgrade-517660 for IP: 192.168.72.244
	I0510 19:17:32.496015  441211 certs.go:194] generating shared ca certs ...
	I0510 19:17:32.496040  441211 certs.go:226] acquiring lock for ca certs: {Name:mk8db74782205da4ac57ef815dd495cda255251a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 19:17:32.496259  441211 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20720-388787/.minikube/ca.key
	I0510 19:17:32.496334  441211 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.key
	I0510 19:17:32.496356  441211 certs.go:256] generating profile certs ...
	I0510 19:17:32.496479  441211 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kubernetes-upgrade-517660/client.key
	I0510 19:17:32.496569  441211 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kubernetes-upgrade-517660/apiserver.key.dcec53ca
	I0510 19:17:32.496631  441211 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kubernetes-upgrade-517660/proxy-client.key
	I0510 19:17:32.496800  441211 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/395980.pem (1338 bytes)
	W0510 19:17:32.496852  441211 certs.go:480] ignoring /home/jenkins/minikube-integration/20720-388787/.minikube/certs/395980_empty.pem, impossibly tiny 0 bytes
	I0510 19:17:32.496868  441211 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem (1679 bytes)
	I0510 19:17:32.496916  441211 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem (1078 bytes)
	I0510 19:17:32.496956  441211 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem (1123 bytes)
	I0510 19:17:32.496987  441211 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem (1675 bytes)
	I0510 19:17:32.497046  441211 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem (1708 bytes)
	I0510 19:17:32.497977  441211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0510 19:17:32.560518  441211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0510 19:17:32.631319  441211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0510 19:17:32.726365  441211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0510 19:17:32.816023  441211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kubernetes-upgrade-517660/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0510 19:17:32.886976  441211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kubernetes-upgrade-517660/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0510 19:17:32.949620  441211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kubernetes-upgrade-517660/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0510 19:17:33.004897  441211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kubernetes-upgrade-517660/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0510 19:17:33.062898  441211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem --> /usr/share/ca-certificates/3959802.pem (1708 bytes)
	I0510 19:17:33.130436  441211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0510 19:17:33.191722  441211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/certs/395980.pem --> /usr/share/ca-certificates/395980.pem (1338 bytes)
	I0510 19:17:33.285432  441211 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0510 19:17:33.350261  441211 ssh_runner.go:195] Run: openssl version
	I0510 19:17:33.362489  441211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3959802.pem && ln -fs /usr/share/ca-certificates/3959802.pem /etc/ssl/certs/3959802.pem"
	I0510 19:17:33.394089  441211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3959802.pem
	I0510 19:17:33.421995  441211 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 10 18:00 /usr/share/ca-certificates/3959802.pem
	I0510 19:17:33.422177  441211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3959802.pem
	I0510 19:17:33.445991  441211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3959802.pem /etc/ssl/certs/3ec20f2e.0"
	I0510 19:17:33.475773  441211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0510 19:17:33.506756  441211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0510 19:17:33.517580  441211 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 10 17:52 /usr/share/ca-certificates/minikubeCA.pem
	I0510 19:17:33.517667  441211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0510 19:17:33.529277  441211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0510 19:17:33.553748  441211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/395980.pem && ln -fs /usr/share/ca-certificates/395980.pem /etc/ssl/certs/395980.pem"
	I0510 19:17:33.581633  441211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/395980.pem
	I0510 19:17:33.594532  441211 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 10 18:00 /usr/share/ca-certificates/395980.pem
	I0510 19:17:33.594621  441211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/395980.pem
	I0510 19:17:33.610094  441211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/395980.pem /etc/ssl/certs/51391683.0"
	I0510 19:17:33.634033  441211 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0510 19:17:33.644574  441211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0510 19:17:33.661440  441211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0510 19:17:33.670756  441211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0510 19:17:33.679944  441211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0510 19:17:33.696184  441211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0510 19:17:33.705942  441211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
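The six openssl x509 -checkend 86400 runs above ask, for each existing control-plane certificate, whether it will still be valid 24 hours from now before StartCluster proceeds without regenerating them. The same check can be expressed with Go's standard library; a sketch, assuming the certificate is a readable PEM file on disk:

	package certsketch
	
	import (
		"crypto/x509"
		"encoding/pem"
		"errors"
		"fmt"
		"os"
		"time"
	)
	
	// expiresWithin reports whether the first certificate in the PEM file at
	// path expires within the given window - the question `openssl x509
	// -checkend` answers for each cert listed in the log above.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil || block.Type != "CERTIFICATE" {
			return false, errors.New("no CERTIFICATE block found in " + path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, fmt.Errorf("parsing %s: %w", path, err)
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}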
	I0510 19:17:33.715953  441211 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-517660 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:kubernetes-upgrade-517660 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.244 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 19:17:33.716069  441211 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0510 19:17:33.716163  441211 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0510 19:17:33.778823  441211 cri.go:89] found id: "e8119b2fe57f81e89556d7d9218d982d94a1b67e795e6989c3697ef90ca78a65"
	I0510 19:17:33.778852  441211 cri.go:89] found id: "379c94172385a75f9bc43de92287177571991567ba71493cd9684aa5dc24ef98"
	I0510 19:17:33.778863  441211 cri.go:89] found id: "06b189768b498aa572e4c21b7573c73485cc45aa24a8f9717579cb8b865f57b4"
	I0510 19:17:33.778869  441211 cri.go:89] found id: "9e12348d055ddd0c870edd2ec224a30c06539be02eb8587a92cc2ea988b5f459"
	I0510 19:17:33.778873  441211 cri.go:89] found id: "38364fa35ca52c723ea72e5574033037a60b739b92de08e799eec1ecb315dd57"
	I0510 19:17:33.778878  441211 cri.go:89] found id: "976bee394737822abcf4d808d090ea0bbc2deb6e6be8b0de5866a9e0f43460f1"
	I0510 19:17:33.778882  441211 cri.go:89] found id: "2f72046512207c8c4a4b81792e84ea610196728613624a97ede56f58c1ce4d49"
	I0510 19:17:33.778886  441211 cri.go:89] found id: "14860b7d557a6f3d1f22336b73336339f00c6f5db92d1a947192890c3bff8952"
	I0510 19:17:33.778889  441211 cri.go:89] found id: "7e201c1f6e099e1ebf33a81c3991c619b069365cfb0b01c5607763218ba5fd28"
	I0510 19:17:33.778898  441211 cri.go:89] found id: "8cc7f1e51aec0c45954d2d1bf2e2e91e4c9b0f1d71a27c9dba085596abbf6c98"
	I0510 19:17:33.778903  441211 cri.go:89] found id: "d5c563de91038ffd3d7f232ed1696b3950f652018069db20ca3a1f0391011125"
	I0510 19:17:33.778907  441211 cri.go:89] found id: ""
	I0510 19:17:33.778960  441211 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-517660 -n kubernetes-upgrade-517660
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-517660 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-517660" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-517660
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-517660: (1.02368628s)
--- FAIL: TestKubernetesUpgrade (459.94s)
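For context on the certificate checks logged above: `openssl x509 -noout -checkend 86400` exits non-zero if the given certificate will expire within the next 86400 seconds (24 hours), which the start path uses here to test whether the existing control-plane certificates under /var/lib/minikube/certs can be kept. A minimal Go sketch of an equivalent expiry check (illustrative only, not minikube's actual code; the certificate path is an assumption) follows:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// mirroring `openssl x509 -checkend <seconds>` (true means "will expire soon").
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Hypothetical path; the log above checks several files under /var/lib/minikube/certs.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}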

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (86.42s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-317241 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0510 19:13:48.489029  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-317241 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m19.451824405s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-317241] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20720
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20720-388787/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-388787/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-317241" primary control-plane node in "pause-317241" cluster
	* Updating the running kvm2 "pause-317241" VM ...
	* Preparing Kubernetes v1.33.0 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-317241" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0510 19:13:39.609756  438136 out.go:345] Setting OutFile to fd 1 ...
	I0510 19:13:39.610070  438136 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 19:13:39.610092  438136 out.go:358] Setting ErrFile to fd 2...
	I0510 19:13:39.610101  438136 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 19:13:39.610981  438136 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-388787/.minikube/bin
	I0510 19:13:39.611619  438136 out.go:352] Setting JSON to false
	I0510 19:13:39.612611  438136 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":32168,"bootTime":1746872252,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0510 19:13:39.612727  438136 start.go:140] virtualization: kvm guest
	I0510 19:13:39.614660  438136 out.go:177] * [pause-317241] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0510 19:13:39.616096  438136 out.go:177]   - MINIKUBE_LOCATION=20720
	I0510 19:13:39.616125  438136 notify.go:220] Checking for updates...
	I0510 19:13:39.618957  438136 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0510 19:13:39.620280  438136 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20720-388787/kubeconfig
	I0510 19:13:39.621495  438136 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-388787/.minikube
	I0510 19:13:39.623041  438136 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0510 19:13:39.624401  438136 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0510 19:13:39.625958  438136 config.go:182] Loaded profile config "pause-317241": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 19:13:39.626444  438136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:13:39.626534  438136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:13:39.643434  438136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33029
	I0510 19:13:39.643963  438136 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:13:39.644543  438136 main.go:141] libmachine: Using API Version  1
	I0510 19:13:39.644570  438136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:13:39.644945  438136 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:13:39.645129  438136 main.go:141] libmachine: (pause-317241) Calling .DriverName
	I0510 19:13:39.645401  438136 driver.go:404] Setting default libvirt URI to qemu:///system
	I0510 19:13:39.645710  438136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:13:39.645748  438136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:13:39.662003  438136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38459
	I0510 19:13:39.662472  438136 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:13:39.662961  438136 main.go:141] libmachine: Using API Version  1
	I0510 19:13:39.662990  438136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:13:39.663425  438136 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:13:39.663642  438136 main.go:141] libmachine: (pause-317241) Calling .DriverName
	I0510 19:13:39.700571  438136 out.go:177] * Using the kvm2 driver based on existing profile
	I0510 19:13:39.701983  438136 start.go:304] selected driver: kvm2
	I0510 19:13:39.702001  438136 start.go:908] validating driver "kvm2" against &{Name:pause-317241 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.33.0 ClusterName:pause-317241 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidi
a-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 19:13:39.702161  438136 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0510 19:13:39.702571  438136 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0510 19:13:39.702657  438136 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20720-388787/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0510 19:13:39.720182  438136 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0510 19:13:39.721233  438136 cni.go:84] Creating CNI manager for ""
	I0510 19:13:39.721311  438136 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0510 19:13:39.721399  438136 start.go:347] cluster config:
	{Name:pause-317241 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:pause-317241 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false
registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 19:13:39.721624  438136 iso.go:125] acquiring lock: {Name:mk19640015999219180c6685480547adf0c02201 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0510 19:13:39.724063  438136 out.go:177] * Starting "pause-317241" primary control-plane node in "pause-317241" cluster
	I0510 19:13:39.725254  438136 preload.go:131] Checking if preload exists for k8s version v1.33.0 and runtime crio
	I0510 19:13:39.725306  438136 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-cri-o-overlay-amd64.tar.lz4
	I0510 19:13:39.725320  438136 cache.go:56] Caching tarball of preloaded images
	I0510 19:13:39.725421  438136 preload.go:172] Found /home/jenkins/minikube-integration/20720-388787/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0510 19:13:39.725433  438136 cache.go:59] Finished verifying existence of preloaded tar for v1.33.0 on crio
	I0510 19:13:39.725596  438136 profile.go:143] Saving config to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/pause-317241/config.json ...
	I0510 19:13:39.725850  438136 start.go:360] acquireMachinesLock for pause-317241: {Name:mk11499d7756d503a7a24339ad1a7f9ab9dc0fab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0510 19:14:24.445318  438136 start.go:364] duration metric: took 44.719433302s to acquireMachinesLock for "pause-317241"
	I0510 19:14:24.445371  438136 start.go:96] Skipping create...Using existing machine configuration
	I0510 19:14:24.445381  438136 fix.go:54] fixHost starting: 
	I0510 19:14:24.446009  438136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:14:24.446073  438136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:14:24.467842  438136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35775
	I0510 19:14:24.468397  438136 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:14:24.468979  438136 main.go:141] libmachine: Using API Version  1
	I0510 19:14:24.469007  438136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:14:24.469402  438136 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:14:24.469609  438136 main.go:141] libmachine: (pause-317241) Calling .DriverName
	I0510 19:14:24.469750  438136 main.go:141] libmachine: (pause-317241) Calling .GetState
	I0510 19:14:24.471832  438136 fix.go:112] recreateIfNeeded on pause-317241: state=Running err=<nil>
	W0510 19:14:24.471863  438136 fix.go:138] unexpected machine state, will restart: <nil>
	I0510 19:14:24.474004  438136 out.go:177] * Updating the running kvm2 "pause-317241" VM ...
	I0510 19:14:24.475369  438136 machine.go:93] provisionDockerMachine start ...
	I0510 19:14:24.475398  438136 main.go:141] libmachine: (pause-317241) Calling .DriverName
	I0510 19:14:24.475638  438136 main.go:141] libmachine: (pause-317241) Calling .GetSSHHostname
	I0510 19:14:24.478803  438136 main.go:141] libmachine: (pause-317241) DBG | domain pause-317241 has defined MAC address 52:54:00:f9:7f:ec in network mk-pause-317241
	I0510 19:14:24.479385  438136 main.go:141] libmachine: (pause-317241) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:7f:ec", ip: ""} in network mk-pause-317241: {Iface:virbr1 ExpiryTime:2025-05-10 20:12:25 +0000 UTC Type:0 Mac:52:54:00:f9:7f:ec Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:pause-317241 Clientid:01:52:54:00:f9:7f:ec}
	I0510 19:14:24.479412  438136 main.go:141] libmachine: (pause-317241) DBG | domain pause-317241 has defined IP address 192.168.39.10 and MAC address 52:54:00:f9:7f:ec in network mk-pause-317241
	I0510 19:14:24.479572  438136 main.go:141] libmachine: (pause-317241) Calling .GetSSHPort
	I0510 19:14:24.479810  438136 main.go:141] libmachine: (pause-317241) Calling .GetSSHKeyPath
	I0510 19:14:24.479978  438136 main.go:141] libmachine: (pause-317241) Calling .GetSSHKeyPath
	I0510 19:14:24.480159  438136 main.go:141] libmachine: (pause-317241) Calling .GetSSHUsername
	I0510 19:14:24.480393  438136 main.go:141] libmachine: Using SSH client type: native
	I0510 19:14:24.480729  438136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0510 19:14:24.480743  438136 main.go:141] libmachine: About to run SSH command:
	hostname
	I0510 19:14:24.590168  438136 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-317241
	
	I0510 19:14:24.590233  438136 main.go:141] libmachine: (pause-317241) Calling .GetMachineName
	I0510 19:14:24.590514  438136 buildroot.go:166] provisioning hostname "pause-317241"
	I0510 19:14:24.590543  438136 main.go:141] libmachine: (pause-317241) Calling .GetMachineName
	I0510 19:14:24.590744  438136 main.go:141] libmachine: (pause-317241) Calling .GetSSHHostname
	I0510 19:14:24.593552  438136 main.go:141] libmachine: (pause-317241) DBG | domain pause-317241 has defined MAC address 52:54:00:f9:7f:ec in network mk-pause-317241
	I0510 19:14:24.593999  438136 main.go:141] libmachine: (pause-317241) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:7f:ec", ip: ""} in network mk-pause-317241: {Iface:virbr1 ExpiryTime:2025-05-10 20:12:25 +0000 UTC Type:0 Mac:52:54:00:f9:7f:ec Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:pause-317241 Clientid:01:52:54:00:f9:7f:ec}
	I0510 19:14:24.594027  438136 main.go:141] libmachine: (pause-317241) DBG | domain pause-317241 has defined IP address 192.168.39.10 and MAC address 52:54:00:f9:7f:ec in network mk-pause-317241
	I0510 19:14:24.594232  438136 main.go:141] libmachine: (pause-317241) Calling .GetSSHPort
	I0510 19:14:24.594403  438136 main.go:141] libmachine: (pause-317241) Calling .GetSSHKeyPath
	I0510 19:14:24.594596  438136 main.go:141] libmachine: (pause-317241) Calling .GetSSHKeyPath
	I0510 19:14:24.594783  438136 main.go:141] libmachine: (pause-317241) Calling .GetSSHUsername
	I0510 19:14:24.594969  438136 main.go:141] libmachine: Using SSH client type: native
	I0510 19:14:24.595198  438136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0510 19:14:24.595216  438136 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-317241 && echo "pause-317241" | sudo tee /etc/hostname
	I0510 19:14:24.737602  438136 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-317241
	
	I0510 19:14:24.737663  438136 main.go:141] libmachine: (pause-317241) Calling .GetSSHHostname
	I0510 19:14:24.740579  438136 main.go:141] libmachine: (pause-317241) DBG | domain pause-317241 has defined MAC address 52:54:00:f9:7f:ec in network mk-pause-317241
	I0510 19:14:24.740870  438136 main.go:141] libmachine: (pause-317241) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:7f:ec", ip: ""} in network mk-pause-317241: {Iface:virbr1 ExpiryTime:2025-05-10 20:12:25 +0000 UTC Type:0 Mac:52:54:00:f9:7f:ec Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:pause-317241 Clientid:01:52:54:00:f9:7f:ec}
	I0510 19:14:24.740969  438136 main.go:141] libmachine: (pause-317241) DBG | domain pause-317241 has defined IP address 192.168.39.10 and MAC address 52:54:00:f9:7f:ec in network mk-pause-317241
	I0510 19:14:24.741111  438136 main.go:141] libmachine: (pause-317241) Calling .GetSSHPort
	I0510 19:14:24.741334  438136 main.go:141] libmachine: (pause-317241) Calling .GetSSHKeyPath
	I0510 19:14:24.741532  438136 main.go:141] libmachine: (pause-317241) Calling .GetSSHKeyPath
	I0510 19:14:24.741687  438136 main.go:141] libmachine: (pause-317241) Calling .GetSSHUsername
	I0510 19:14:24.741907  438136 main.go:141] libmachine: Using SSH client type: native
	I0510 19:14:24.742160  438136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0510 19:14:24.742180  438136 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-317241' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-317241/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-317241' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0510 19:14:24.853732  438136 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0510 19:14:24.853771  438136 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20720-388787/.minikube CaCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20720-388787/.minikube}
	I0510 19:14:24.853789  438136 buildroot.go:174] setting up certificates
	I0510 19:14:24.853801  438136 provision.go:84] configureAuth start
	I0510 19:14:24.853812  438136 main.go:141] libmachine: (pause-317241) Calling .GetMachineName
	I0510 19:14:24.854107  438136 main.go:141] libmachine: (pause-317241) Calling .GetIP
	I0510 19:14:24.856679  438136 main.go:141] libmachine: (pause-317241) DBG | domain pause-317241 has defined MAC address 52:54:00:f9:7f:ec in network mk-pause-317241
	I0510 19:14:24.857119  438136 main.go:141] libmachine: (pause-317241) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:7f:ec", ip: ""} in network mk-pause-317241: {Iface:virbr1 ExpiryTime:2025-05-10 20:12:25 +0000 UTC Type:0 Mac:52:54:00:f9:7f:ec Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:pause-317241 Clientid:01:52:54:00:f9:7f:ec}
	I0510 19:14:24.857153  438136 main.go:141] libmachine: (pause-317241) DBG | domain pause-317241 has defined IP address 192.168.39.10 and MAC address 52:54:00:f9:7f:ec in network mk-pause-317241
	I0510 19:14:24.857315  438136 main.go:141] libmachine: (pause-317241) Calling .GetSSHHostname
	I0510 19:14:24.859933  438136 main.go:141] libmachine: (pause-317241) DBG | domain pause-317241 has defined MAC address 52:54:00:f9:7f:ec in network mk-pause-317241
	I0510 19:14:24.860402  438136 main.go:141] libmachine: (pause-317241) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:7f:ec", ip: ""} in network mk-pause-317241: {Iface:virbr1 ExpiryTime:2025-05-10 20:12:25 +0000 UTC Type:0 Mac:52:54:00:f9:7f:ec Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:pause-317241 Clientid:01:52:54:00:f9:7f:ec}
	I0510 19:14:24.860447  438136 main.go:141] libmachine: (pause-317241) DBG | domain pause-317241 has defined IP address 192.168.39.10 and MAC address 52:54:00:f9:7f:ec in network mk-pause-317241
	I0510 19:14:24.860585  438136 provision.go:143] copyHostCerts
	I0510 19:14:24.860705  438136 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem, removing ...
	I0510 19:14:24.860726  438136 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem
	I0510 19:14:24.860796  438136 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem (1078 bytes)
	I0510 19:14:24.860937  438136 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem, removing ...
	I0510 19:14:24.860949  438136 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem
	I0510 19:14:24.860984  438136 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem (1123 bytes)
	I0510 19:14:24.861139  438136 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem, removing ...
	I0510 19:14:24.861159  438136 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem
	I0510 19:14:24.861192  438136 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem (1675 bytes)
	I0510 19:14:24.861280  438136 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem org=jenkins.pause-317241 san=[127.0.0.1 192.168.39.10 localhost minikube pause-317241]
	I0510 19:14:25.111687  438136 provision.go:177] copyRemoteCerts
	I0510 19:14:25.111761  438136 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0510 19:14:25.111788  438136 main.go:141] libmachine: (pause-317241) Calling .GetSSHHostname
	I0510 19:14:25.114905  438136 main.go:141] libmachine: (pause-317241) DBG | domain pause-317241 has defined MAC address 52:54:00:f9:7f:ec in network mk-pause-317241
	I0510 19:14:25.115275  438136 main.go:141] libmachine: (pause-317241) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:7f:ec", ip: ""} in network mk-pause-317241: {Iface:virbr1 ExpiryTime:2025-05-10 20:12:25 +0000 UTC Type:0 Mac:52:54:00:f9:7f:ec Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:pause-317241 Clientid:01:52:54:00:f9:7f:ec}
	I0510 19:14:25.115307  438136 main.go:141] libmachine: (pause-317241) DBG | domain pause-317241 has defined IP address 192.168.39.10 and MAC address 52:54:00:f9:7f:ec in network mk-pause-317241
	I0510 19:14:25.115463  438136 main.go:141] libmachine: (pause-317241) Calling .GetSSHPort
	I0510 19:14:25.115686  438136 main.go:141] libmachine: (pause-317241) Calling .GetSSHKeyPath
	I0510 19:14:25.115877  438136 main.go:141] libmachine: (pause-317241) Calling .GetSSHUsername
	I0510 19:14:25.116030  438136 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/pause-317241/id_rsa Username:docker}
	I0510 19:14:25.200701  438136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0510 19:14:25.235121  438136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0510 19:14:25.267261  438136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0510 19:14:25.316334  438136 provision.go:87] duration metric: took 462.519385ms to configureAuth
	I0510 19:14:25.316366  438136 buildroot.go:189] setting minikube options for container-runtime
	I0510 19:14:25.316612  438136 config.go:182] Loaded profile config "pause-317241": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 19:14:25.316704  438136 main.go:141] libmachine: (pause-317241) Calling .GetSSHHostname
	I0510 19:14:25.319706  438136 main.go:141] libmachine: (pause-317241) DBG | domain pause-317241 has defined MAC address 52:54:00:f9:7f:ec in network mk-pause-317241
	I0510 19:14:25.320092  438136 main.go:141] libmachine: (pause-317241) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:7f:ec", ip: ""} in network mk-pause-317241: {Iface:virbr1 ExpiryTime:2025-05-10 20:12:25 +0000 UTC Type:0 Mac:52:54:00:f9:7f:ec Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:pause-317241 Clientid:01:52:54:00:f9:7f:ec}
	I0510 19:14:25.320139  438136 main.go:141] libmachine: (pause-317241) DBG | domain pause-317241 has defined IP address 192.168.39.10 and MAC address 52:54:00:f9:7f:ec in network mk-pause-317241
	I0510 19:14:25.320347  438136 main.go:141] libmachine: (pause-317241) Calling .GetSSHPort
	I0510 19:14:25.320577  438136 main.go:141] libmachine: (pause-317241) Calling .GetSSHKeyPath
	I0510 19:14:25.320786  438136 main.go:141] libmachine: (pause-317241) Calling .GetSSHKeyPath
	I0510 19:14:25.320957  438136 main.go:141] libmachine: (pause-317241) Calling .GetSSHUsername
	I0510 19:14:25.321133  438136 main.go:141] libmachine: Using SSH client type: native
	I0510 19:14:25.321428  438136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0510 19:14:25.321449  438136 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0510 19:14:31.523392  438136 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0510 19:14:31.523427  438136 machine.go:96] duration metric: took 7.048038737s to provisionDockerMachine
	I0510 19:14:31.523444  438136 start.go:293] postStartSetup for "pause-317241" (driver="kvm2")
	I0510 19:14:31.523458  438136 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0510 19:14:31.523483  438136 main.go:141] libmachine: (pause-317241) Calling .DriverName
	I0510 19:14:31.523869  438136 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0510 19:14:31.523906  438136 main.go:141] libmachine: (pause-317241) Calling .GetSSHHostname
	I0510 19:14:31.527651  438136 main.go:141] libmachine: (pause-317241) DBG | domain pause-317241 has defined MAC address 52:54:00:f9:7f:ec in network mk-pause-317241
	I0510 19:14:31.528095  438136 main.go:141] libmachine: (pause-317241) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:7f:ec", ip: ""} in network mk-pause-317241: {Iface:virbr1 ExpiryTime:2025-05-10 20:12:25 +0000 UTC Type:0 Mac:52:54:00:f9:7f:ec Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:pause-317241 Clientid:01:52:54:00:f9:7f:ec}
	I0510 19:14:31.528129  438136 main.go:141] libmachine: (pause-317241) DBG | domain pause-317241 has defined IP address 192.168.39.10 and MAC address 52:54:00:f9:7f:ec in network mk-pause-317241
	I0510 19:14:31.528308  438136 main.go:141] libmachine: (pause-317241) Calling .GetSSHPort
	I0510 19:14:31.528533  438136 main.go:141] libmachine: (pause-317241) Calling .GetSSHKeyPath
	I0510 19:14:31.528764  438136 main.go:141] libmachine: (pause-317241) Calling .GetSSHUsername
	I0510 19:14:31.528975  438136 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/pause-317241/id_rsa Username:docker}
	I0510 19:14:31.617511  438136 ssh_runner.go:195] Run: cat /etc/os-release
	I0510 19:14:31.622610  438136 info.go:137] Remote host: Buildroot 2024.11.2
	I0510 19:14:31.622645  438136 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-388787/.minikube/addons for local assets ...
	I0510 19:14:31.622714  438136 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-388787/.minikube/files for local assets ...
	I0510 19:14:31.622815  438136 filesync.go:149] local asset: /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem -> 3959802.pem in /etc/ssl/certs
	I0510 19:14:31.622924  438136 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0510 19:14:31.638457  438136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem --> /etc/ssl/certs/3959802.pem (1708 bytes)
	I0510 19:14:31.673093  438136 start.go:296] duration metric: took 149.629733ms for postStartSetup
	I0510 19:14:31.673147  438136 fix.go:56] duration metric: took 7.227765098s for fixHost
	I0510 19:14:31.673174  438136 main.go:141] libmachine: (pause-317241) Calling .GetSSHHostname
	I0510 19:14:31.676425  438136 main.go:141] libmachine: (pause-317241) DBG | domain pause-317241 has defined MAC address 52:54:00:f9:7f:ec in network mk-pause-317241
	I0510 19:14:31.676900  438136 main.go:141] libmachine: (pause-317241) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:7f:ec", ip: ""} in network mk-pause-317241: {Iface:virbr1 ExpiryTime:2025-05-10 20:12:25 +0000 UTC Type:0 Mac:52:54:00:f9:7f:ec Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:pause-317241 Clientid:01:52:54:00:f9:7f:ec}
	I0510 19:14:31.676945  438136 main.go:141] libmachine: (pause-317241) DBG | domain pause-317241 has defined IP address 192.168.39.10 and MAC address 52:54:00:f9:7f:ec in network mk-pause-317241
	I0510 19:14:31.677121  438136 main.go:141] libmachine: (pause-317241) Calling .GetSSHPort
	I0510 19:14:31.677347  438136 main.go:141] libmachine: (pause-317241) Calling .GetSSHKeyPath
	I0510 19:14:31.677542  438136 main.go:141] libmachine: (pause-317241) Calling .GetSSHKeyPath
	I0510 19:14:31.677694  438136 main.go:141] libmachine: (pause-317241) Calling .GetSSHUsername
	I0510 19:14:31.677895  438136 main.go:141] libmachine: Using SSH client type: native
	I0510 19:14:31.678166  438136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0510 19:14:31.678179  438136 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0510 19:14:31.785007  438136 main.go:141] libmachine: SSH cmd err, output: <nil>: 1746904471.780515143
	
	I0510 19:14:31.785035  438136 fix.go:216] guest clock: 1746904471.780515143
	I0510 19:14:31.785043  438136 fix.go:229] Guest: 2025-05-10 19:14:31.780515143 +0000 UTC Remote: 2025-05-10 19:14:31.673152245 +0000 UTC m=+52.106858273 (delta=107.362898ms)
	I0510 19:14:31.785071  438136 fix.go:200] guest clock delta is within tolerance: 107.362898ms
	I0510 19:14:31.785078  438136 start.go:83] releasing machines lock for "pause-317241", held for 7.339733355s
	I0510 19:14:31.785109  438136 main.go:141] libmachine: (pause-317241) Calling .DriverName
	I0510 19:14:31.785386  438136 main.go:141] libmachine: (pause-317241) Calling .GetIP
	I0510 19:14:31.788440  438136 main.go:141] libmachine: (pause-317241) DBG | domain pause-317241 has defined MAC address 52:54:00:f9:7f:ec in network mk-pause-317241
	I0510 19:14:31.788887  438136 main.go:141] libmachine: (pause-317241) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:7f:ec", ip: ""} in network mk-pause-317241: {Iface:virbr1 ExpiryTime:2025-05-10 20:12:25 +0000 UTC Type:0 Mac:52:54:00:f9:7f:ec Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:pause-317241 Clientid:01:52:54:00:f9:7f:ec}
	I0510 19:14:31.788916  438136 main.go:141] libmachine: (pause-317241) DBG | domain pause-317241 has defined IP address 192.168.39.10 and MAC address 52:54:00:f9:7f:ec in network mk-pause-317241
	I0510 19:14:31.789113  438136 main.go:141] libmachine: (pause-317241) Calling .DriverName
	I0510 19:14:31.789814  438136 main.go:141] libmachine: (pause-317241) Calling .DriverName
	I0510 19:14:31.790028  438136 main.go:141] libmachine: (pause-317241) Calling .DriverName
	I0510 19:14:31.790149  438136 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0510 19:14:31.790221  438136 main.go:141] libmachine: (pause-317241) Calling .GetSSHHostname
	I0510 19:14:31.790287  438136 ssh_runner.go:195] Run: cat /version.json
	I0510 19:14:31.790320  438136 main.go:141] libmachine: (pause-317241) Calling .GetSSHHostname
	I0510 19:14:31.793256  438136 main.go:141] libmachine: (pause-317241) DBG | domain pause-317241 has defined MAC address 52:54:00:f9:7f:ec in network mk-pause-317241
	I0510 19:14:31.793548  438136 main.go:141] libmachine: (pause-317241) DBG | domain pause-317241 has defined MAC address 52:54:00:f9:7f:ec in network mk-pause-317241
	I0510 19:14:31.793682  438136 main.go:141] libmachine: (pause-317241) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:7f:ec", ip: ""} in network mk-pause-317241: {Iface:virbr1 ExpiryTime:2025-05-10 20:12:25 +0000 UTC Type:0 Mac:52:54:00:f9:7f:ec Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:pause-317241 Clientid:01:52:54:00:f9:7f:ec}
	I0510 19:14:31.793716  438136 main.go:141] libmachine: (pause-317241) DBG | domain pause-317241 has defined IP address 192.168.39.10 and MAC address 52:54:00:f9:7f:ec in network mk-pause-317241
	I0510 19:14:31.793899  438136 main.go:141] libmachine: (pause-317241) Calling .GetSSHPort
	I0510 19:14:31.794000  438136 main.go:141] libmachine: (pause-317241) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:7f:ec", ip: ""} in network mk-pause-317241: {Iface:virbr1 ExpiryTime:2025-05-10 20:12:25 +0000 UTC Type:0 Mac:52:54:00:f9:7f:ec Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:pause-317241 Clientid:01:52:54:00:f9:7f:ec}
	I0510 19:14:31.794064  438136 main.go:141] libmachine: (pause-317241) DBG | domain pause-317241 has defined IP address 192.168.39.10 and MAC address 52:54:00:f9:7f:ec in network mk-pause-317241
	I0510 19:14:31.794091  438136 main.go:141] libmachine: (pause-317241) Calling .GetSSHKeyPath
	I0510 19:14:31.794285  438136 main.go:141] libmachine: (pause-317241) Calling .GetSSHUsername
	I0510 19:14:31.794317  438136 main.go:141] libmachine: (pause-317241) Calling .GetSSHPort
	I0510 19:14:31.794495  438136 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/pause-317241/id_rsa Username:docker}
	I0510 19:14:31.794567  438136 main.go:141] libmachine: (pause-317241) Calling .GetSSHKeyPath
	I0510 19:14:31.794746  438136 main.go:141] libmachine: (pause-317241) Calling .GetSSHUsername
	I0510 19:14:31.794908  438136 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/pause-317241/id_rsa Username:docker}
	I0510 19:14:31.923144  438136 ssh_runner.go:195] Run: systemctl --version
	I0510 19:14:31.930628  438136 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0510 19:14:32.104072  438136 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0510 19:14:32.111144  438136 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0510 19:14:32.111272  438136 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0510 19:14:32.123893  438136 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0510 19:14:32.123930  438136 start.go:495] detecting cgroup driver to use...
	I0510 19:14:32.124018  438136 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0510 19:14:32.145944  438136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0510 19:14:32.165257  438136 docker.go:225] disabling cri-docker service (if available) ...
	I0510 19:14:32.165337  438136 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0510 19:14:32.182945  438136 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0510 19:14:32.200921  438136 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0510 19:14:32.406434  438136 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0510 19:14:32.608541  438136 docker.go:241] disabling docker service ...
	I0510 19:14:32.608651  438136 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0510 19:14:32.641030  438136 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0510 19:14:32.658936  438136 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0510 19:14:32.862494  438136 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0510 19:14:33.053672  438136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0510 19:14:33.077180  438136 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0510 19:14:33.106347  438136 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0510 19:14:33.106424  438136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:14:33.119688  438136 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0510 19:14:33.119762  438136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:14:33.132756  438136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:14:33.145209  438136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:14:33.161597  438136 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0510 19:14:33.175568  438136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:14:33.192586  438136 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:14:33.211506  438136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:14:33.227430  438136 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0510 19:14:33.240968  438136 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0510 19:14:33.253413  438136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 19:14:33.508478  438136 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0510 19:14:33.965371  438136 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0510 19:14:33.965456  438136 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0510 19:14:33.972880  438136 start.go:563] Will wait 60s for crictl version
	I0510 19:14:33.972941  438136 ssh_runner.go:195] Run: which crictl
	I0510 19:14:33.979838  438136 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0510 19:14:34.033940  438136 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0510 19:14:34.034039  438136 ssh_runner.go:195] Run: crio --version
	I0510 19:14:34.069617  438136 ssh_runner.go:195] Run: crio --version
	I0510 19:14:34.117498  438136 out.go:177] * Preparing Kubernetes v1.33.0 on CRI-O 1.29.1 ...
	I0510 19:14:34.118988  438136 main.go:141] libmachine: (pause-317241) Calling .GetIP
	I0510 19:14:34.122386  438136 main.go:141] libmachine: (pause-317241) DBG | domain pause-317241 has defined MAC address 52:54:00:f9:7f:ec in network mk-pause-317241
	I0510 19:14:34.122797  438136 main.go:141] libmachine: (pause-317241) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:7f:ec", ip: ""} in network mk-pause-317241: {Iface:virbr1 ExpiryTime:2025-05-10 20:12:25 +0000 UTC Type:0 Mac:52:54:00:f9:7f:ec Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:pause-317241 Clientid:01:52:54:00:f9:7f:ec}
	I0510 19:14:34.122827  438136 main.go:141] libmachine: (pause-317241) DBG | domain pause-317241 has defined IP address 192.168.39.10 and MAC address 52:54:00:f9:7f:ec in network mk-pause-317241
	I0510 19:14:34.123134  438136 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0510 19:14:34.128623  438136 kubeadm.go:875] updating cluster {Name:pause-317241 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0
ClusterName:pause-317241 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-pl
ugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0510 19:14:34.128789  438136 preload.go:131] Checking if preload exists for k8s version v1.33.0 and runtime crio
	I0510 19:14:34.128854  438136 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 19:14:34.188240  438136 crio.go:514] all images are preloaded for cri-o runtime.
	I0510 19:14:34.188271  438136 crio.go:433] Images already preloaded, skipping extraction
	I0510 19:14:34.188338  438136 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 19:14:34.241152  438136 crio.go:514] all images are preloaded for cri-o runtime.
	I0510 19:14:34.241183  438136 cache_images.go:84] Images are preloaded, skipping loading
	I0510 19:14:34.241192  438136 kubeadm.go:926] updating node { 192.168.39.10 8443 v1.33.0 crio true true} ...
	I0510 19:14:34.241318  438136 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.33.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-317241 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.33.0 ClusterName:pause-317241 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0510 19:14:34.241401  438136 ssh_runner.go:195] Run: crio config
	I0510 19:14:34.305305  438136 cni.go:84] Creating CNI manager for ""
	I0510 19:14:34.305337  438136 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0510 19:14:34.305350  438136 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0510 19:14:34.305382  438136 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.10 APIServerPort:8443 KubernetesVersion:v1.33.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-317241 NodeName:pause-317241 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0510 19:14:34.305609  438136 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.10
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-317241"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.10"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.10"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.33.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0510 19:14:34.305711  438136 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.33.0
	I0510 19:14:34.319654  438136 binaries.go:44] Found k8s binaries, skipping transfer
	I0510 19:14:34.319744  438136 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0510 19:14:34.333001  438136 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0510 19:14:34.359301  438136 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0510 19:14:34.387202  438136 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I0510 19:14:34.415983  438136 ssh_runner.go:195] Run: grep 192.168.39.10	control-plane.minikube.internal$ /etc/hosts
	I0510 19:14:34.421204  438136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 19:14:34.658086  438136 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0510 19:14:34.726369  438136 certs.go:68] Setting up /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/pause-317241 for IP: 192.168.39.10
	I0510 19:14:34.726398  438136 certs.go:194] generating shared ca certs ...
	I0510 19:14:34.726427  438136 certs.go:226] acquiring lock for ca certs: {Name:mk8db74782205da4ac57ef815dd495cda255251a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 19:14:34.726652  438136 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20720-388787/.minikube/ca.key
	I0510 19:14:34.726789  438136 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.key
	I0510 19:14:34.726809  438136 certs.go:256] generating profile certs ...
	I0510 19:14:34.726938  438136 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/pause-317241/client.key
	I0510 19:14:34.727052  438136 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/pause-317241/apiserver.key.234f754e
	I0510 19:14:34.727112  438136 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/pause-317241/proxy-client.key
	I0510 19:14:34.727297  438136 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/395980.pem (1338 bytes)
	W0510 19:14:34.727348  438136 certs.go:480] ignoring /home/jenkins/minikube-integration/20720-388787/.minikube/certs/395980_empty.pem, impossibly tiny 0 bytes
	I0510 19:14:34.727362  438136 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem (1679 bytes)
	I0510 19:14:34.727402  438136 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem (1078 bytes)
	I0510 19:14:34.727438  438136 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem (1123 bytes)
	I0510 19:14:34.727477  438136 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem (1675 bytes)
	I0510 19:14:34.727548  438136 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem (1708 bytes)
	I0510 19:14:34.728380  438136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0510 19:14:34.784555  438136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0510 19:14:34.835531  438136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0510 19:14:34.875423  438136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0510 19:14:34.911888  438136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/pause-317241/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0510 19:14:34.945714  438136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/pause-317241/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0510 19:14:34.987193  438136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/pause-317241/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0510 19:14:35.021080  438136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/pause-317241/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0510 19:14:35.054368  438136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/certs/395980.pem --> /usr/share/ca-certificates/395980.pem (1338 bytes)
	I0510 19:14:35.088161  438136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem --> /usr/share/ca-certificates/3959802.pem (1708 bytes)
	I0510 19:14:35.128092  438136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0510 19:14:35.171137  438136 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0510 19:14:35.193543  438136 ssh_runner.go:195] Run: openssl version
	I0510 19:14:35.200701  438136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3959802.pem && ln -fs /usr/share/ca-certificates/3959802.pem /etc/ssl/certs/3959802.pem"
	I0510 19:14:35.218089  438136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3959802.pem
	I0510 19:14:35.224633  438136 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 10 18:00 /usr/share/ca-certificates/3959802.pem
	I0510 19:14:35.224716  438136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3959802.pem
	I0510 19:14:35.234208  438136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3959802.pem /etc/ssl/certs/3ec20f2e.0"
	I0510 19:14:35.248186  438136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0510 19:14:35.264454  438136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0510 19:14:35.270602  438136 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 10 17:52 /usr/share/ca-certificates/minikubeCA.pem
	I0510 19:14:35.270685  438136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0510 19:14:35.279160  438136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0510 19:14:35.295448  438136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/395980.pem && ln -fs /usr/share/ca-certificates/395980.pem /etc/ssl/certs/395980.pem"
	I0510 19:14:35.313737  438136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/395980.pem
	I0510 19:14:35.319871  438136 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 10 18:00 /usr/share/ca-certificates/395980.pem
	I0510 19:14:35.319974  438136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/395980.pem
	I0510 19:14:35.331440  438136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/395980.pem /etc/ssl/certs/51391683.0"
	I0510 19:14:35.346654  438136 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0510 19:14:35.352386  438136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0510 19:14:35.359806  438136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0510 19:14:35.367559  438136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0510 19:14:35.375758  438136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0510 19:14:35.387197  438136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0510 19:14:35.395650  438136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0510 19:14:35.403890  438136 kubeadm.go:392] StartCluster: {Name:pause-317241 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 Cl
usterName:pause-317241 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugi
n:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 19:14:35.404050  438136 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0510 19:14:35.404110  438136 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0510 19:14:35.454579  438136 cri.go:89] found id: "60d3f14ff709f7784986c33fb2487c2f5a652445cbba518e696441f701e452fb"
	I0510 19:14:35.454611  438136 cri.go:89] found id: "c9945e09f6b6d261039029f89e168d45dd9fc4acf65b417b637a7704d3cc6df5"
	I0510 19:14:35.454617  438136 cri.go:89] found id: "b96ba0c6867681eb5c3dd0df167dc56dd09ffcb675f8fa26472566e54feb7385"
	I0510 19:14:35.454624  438136 cri.go:89] found id: "6989a7e3ea042c054e6f979c8042e6a4f7c82fab32f4778857b936239f6db91c"
	I0510 19:14:35.454629  438136 cri.go:89] found id: "b84f77943081f73cd80a1376987cceac5bbcb6932aaab74ffc59f9400d903650"
	I0510 19:14:35.454635  438136 cri.go:89] found id: "5ac36579b810dd23a78153742e60a40498c4c2744c1c1b600d92974993419a57"
	I0510 19:14:35.454640  438136 cri.go:89] found id: ""
	I0510 19:14:35.454705  438136 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
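The stderr capture above ends with minikube regenerating the kubeadm configuration for pause-317241, syncing the profile certificates, and re-listing the paused kube-system containers. As a rough manual sketch only (not part of the recorded run; it assumes the file and certificate paths shown in the log, /var/tmp/minikube/kubeadm.yaml.new and /var/lib/minikube/certs/apiserver.crt, and kubeadm v1.33.0 inside the node), the generated config and the apiserver certificate lifetime could be inspected the same way the test does, via the profile's ssh command:

	# validate the kubeadm config that minikube scp'd to the node (path taken from the log above)
	out/minikube-linux-amd64 -p pause-317241 ssh "sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new"
	# print the apiserver cert expiry, analogous to the openssl -checkend 86400 checks in the log
	out/minikube-linux-amd64 -p pause-317241 ssh "sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"
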
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-317241 -n pause-317241
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-317241 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-317241 logs -n 25: (1.867128438s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-380533 sudo              | cilium-380533             | jenkins | v1.35.0 | 10 May 25 19:10 UTC |                     |
	|         | containerd config dump             |                           |         |         |                     |                     |
	| ssh     | -p cilium-380533 sudo              | cilium-380533             | jenkins | v1.35.0 | 10 May 25 19:10 UTC |                     |
	|         | systemctl status crio --all        |                           |         |         |                     |                     |
	|         | --full --no-pager                  |                           |         |         |                     |                     |
	| ssh     | -p cilium-380533 sudo              | cilium-380533             | jenkins | v1.35.0 | 10 May 25 19:10 UTC |                     |
	|         | systemctl cat crio --no-pager      |                           |         |         |                     |                     |
	| ssh     | -p cilium-380533 sudo find         | cilium-380533             | jenkins | v1.35.0 | 10 May 25 19:10 UTC |                     |
	|         | /etc/crio -type f -exec sh -c      |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;               |                           |         |         |                     |                     |
	| ssh     | -p cilium-380533 sudo crio         | cilium-380533             | jenkins | v1.35.0 | 10 May 25 19:10 UTC |                     |
	|         | config                             |                           |         |         |                     |                     |
	| delete  | -p cilium-380533                   | cilium-380533             | jenkins | v1.35.0 | 10 May 25 19:10 UTC | 10 May 25 19:10 UTC |
	| start   | -p kubernetes-upgrade-517660       | kubernetes-upgrade-517660 | jenkins | v1.35.0 | 10 May 25 19:10 UTC |                     |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0       |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-065180             | NoKubernetes-065180       | jenkins | v1.35.0 | 10 May 25 19:11 UTC | 10 May 25 19:12 UTC |
	|         | --no-kubernetes --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p offline-crio-031624             | offline-crio-031624       | jenkins | v1.35.0 | 10 May 25 19:11 UTC | 10 May 25 19:11 UTC |
	| start   | -p pause-317241 --memory=2048      | pause-317241              | jenkins | v1.35.0 | 10 May 25 19:11 UTC | 10 May 25 19:13 UTC |
	|         | --install-addons=false             |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p running-upgrade-085041          | running-upgrade-085041    | jenkins | v1.35.0 | 10 May 25 19:12 UTC | 10 May 25 19:13 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-065180             | NoKubernetes-065180       | jenkins | v1.35.0 | 10 May 25 19:12 UTC | 10 May 25 19:12 UTC |
	| start   | -p NoKubernetes-065180             | NoKubernetes-065180       | jenkins | v1.35.0 | 10 May 25 19:12 UTC | 10 May 25 19:13 UTC |
	|         | --no-kubernetes --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-065180 sudo        | NoKubernetes-065180       | jenkins | v1.35.0 | 10 May 25 19:13 UTC |                     |
	|         | systemctl is-active --quiet        |                           |         |         |                     |                     |
	|         | service kubelet                    |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-065180             | NoKubernetes-065180       | jenkins | v1.35.0 | 10 May 25 19:13 UTC | 10 May 25 19:13 UTC |
	| delete  | -p running-upgrade-085041          | running-upgrade-085041    | jenkins | v1.35.0 | 10 May 25 19:13 UTC | 10 May 25 19:13 UTC |
	| start   | -p NoKubernetes-065180             | NoKubernetes-065180       | jenkins | v1.35.0 | 10 May 25 19:13 UTC | 10 May 25 19:14 UTC |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p force-systemd-flag-525854       | force-systemd-flag-525854 | jenkins | v1.35.0 | 10 May 25 19:13 UTC | 10 May 25 19:14 UTC |
	|         | --memory=2048 --force-systemd      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p pause-317241                    | pause-317241              | jenkins | v1.35.0 | 10 May 25 19:13 UTC | 10 May 25 19:14 UTC |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-065180 sudo        | NoKubernetes-065180       | jenkins | v1.35.0 | 10 May 25 19:14 UTC |                     |
	|         | systemctl is-active --quiet        |                           |         |         |                     |                     |
	|         | service kubelet                    |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-065180             | NoKubernetes-065180       | jenkins | v1.35.0 | 10 May 25 19:14 UTC | 10 May 25 19:14 UTC |
	| start   | -p force-systemd-env-429136        | force-systemd-env-429136  | jenkins | v1.35.0 | 10 May 25 19:14 UTC |                     |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-525854 ssh cat  | force-systemd-flag-525854 | jenkins | v1.35.0 | 10 May 25 19:14 UTC | 10 May 25 19:14 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-525854       | force-systemd-flag-525854 | jenkins | v1.35.0 | 10 May 25 19:14 UTC | 10 May 25 19:14 UTC |
	| start   | -p cert-expiration-355262          | cert-expiration-355262    | jenkins | v1.35.0 | 10 May 25 19:14 UTC |                     |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --cert-expiration=3m               |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/05/10 19:14:48
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0510 19:14:48.172142  438986 out.go:345] Setting OutFile to fd 1 ...
	I0510 19:14:48.172255  438986 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 19:14:48.172259  438986 out.go:358] Setting ErrFile to fd 2...
	I0510 19:14:48.172262  438986 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 19:14:48.172450  438986 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-388787/.minikube/bin
	I0510 19:14:48.173059  438986 out.go:352] Setting JSON to false
	I0510 19:14:48.174106  438986 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":32236,"bootTime":1746872252,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0510 19:14:48.174167  438986 start.go:140] virtualization: kvm guest
	I0510 19:14:48.177051  438986 out.go:177] * [cert-expiration-355262] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0510 19:14:48.178701  438986 out.go:177]   - MINIKUBE_LOCATION=20720
	I0510 19:14:48.178717  438986 notify.go:220] Checking for updates...
	I0510 19:14:48.181638  438986 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0510 19:14:48.183032  438986 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20720-388787/kubeconfig
	I0510 19:14:48.184383  438986 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-388787/.minikube
	I0510 19:14:48.185734  438986 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0510 19:14:48.187336  438986 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0510 19:14:48.189574  438986 config.go:182] Loaded profile config "force-systemd-env-429136": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 19:14:48.189716  438986 config.go:182] Loaded profile config "kubernetes-upgrade-517660": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0510 19:14:48.189893  438986 config.go:182] Loaded profile config "pause-317241": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 19:14:48.190033  438986 driver.go:404] Setting default libvirt URI to qemu:///system
	I0510 19:14:48.228302  438986 out.go:177] * Using the kvm2 driver based on user configuration
	I0510 19:14:48.229715  438986 start.go:304] selected driver: kvm2
	I0510 19:14:48.229735  438986 start.go:908] validating driver "kvm2" against <nil>
	I0510 19:14:48.229748  438986 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0510 19:14:48.230464  438986 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0510 19:14:48.230547  438986 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20720-388787/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0510 19:14:48.246819  438986 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0510 19:14:48.246879  438986 start_flags.go:311] no existing cluster config was found, will generate one from the flags 
	I0510 19:14:48.247120  438986 start_flags.go:957] Wait components to verify : map[apiserver:true system_pods:true]
	I0510 19:14:48.247139  438986 cni.go:84] Creating CNI manager for ""
	I0510 19:14:48.247180  438986 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0510 19:14:48.247187  438986 start_flags.go:320] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0510 19:14:48.247257  438986 start.go:347] cluster config:
	{Name:cert-expiration-355262 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:cert-expiration-355262 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 19:14:48.247361  438986 iso.go:125] acquiring lock: {Name:mk19640015999219180c6685480547adf0c02201 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0510 19:14:48.249551  438986 out.go:177] * Starting "cert-expiration-355262" primary control-plane node in "cert-expiration-355262" cluster
	I0510 19:14:44.775720  438136 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0510 19:14:44.808812  438136 node_ready.go:35] waiting up to 6m0s for node "pause-317241" to be "Ready" ...
	I0510 19:14:44.817862  438136 node_ready.go:49] node "pause-317241" is "Ready"
	I0510 19:14:44.817900  438136 node_ready.go:38] duration metric: took 9.042599ms for node "pause-317241" to be "Ready" ...
	I0510 19:14:44.817912  438136 api_server.go:52] waiting for apiserver process to appear ...
	I0510 19:14:44.817970  438136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:14:44.847393  438136 api_server.go:72] duration metric: took 442.45631ms to wait for apiserver process to appear ...
	I0510 19:14:44.847424  438136 api_server.go:88] waiting for apiserver healthz status ...
	I0510 19:14:44.847444  438136 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0510 19:14:44.858555  438136 api_server.go:279] https://192.168.39.10:8443/healthz returned 200:
	ok
	I0510 19:14:44.859684  438136 api_server.go:141] control plane version: v1.33.0
	I0510 19:14:44.859707  438136 api_server.go:131] duration metric: took 12.277046ms to wait for apiserver health ...
	I0510 19:14:44.859715  438136 system_pods.go:43] waiting for kube-system pods to appear ...
	I0510 19:14:44.862773  438136 system_pods.go:59] 6 kube-system pods found
	I0510 19:14:44.862817  438136 system_pods.go:61] "coredns-674b8bbfcf-2cc2n" [c1ecbdbb-8d9b-4ecf-a9a2-94d3478e1128] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0510 19:14:44.862830  438136 system_pods.go:61] "etcd-pause-317241" [139a211d-954b-48b1-9d06-04930cbae3ef] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0510 19:14:44.862844  438136 system_pods.go:61] "kube-apiserver-pause-317241" [6df56230-68a6-49e8-8fa0-9de8dcea547a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0510 19:14:44.862858  438136 system_pods.go:61] "kube-controller-manager-pause-317241" [90a731bf-4486-4c65-b1b0-b502df8db86f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0510 19:14:44.862865  438136 system_pods.go:61] "kube-proxy-skvbp" [08543e5b-1085-4de5-9922-16d2a027fb0e] Running
	I0510 19:14:44.862883  438136 system_pods.go:61] "kube-scheduler-pause-317241" [f0725bb2-7a49-4852-a9aa-8f03137243a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0510 19:14:44.862896  438136 system_pods.go:74] duration metric: took 3.17322ms to wait for pod list to return data ...
	I0510 19:14:44.862914  438136 default_sa.go:34] waiting for default service account to be created ...
	I0510 19:14:44.871600  438136 default_sa.go:45] found service account: "default"
	I0510 19:14:44.871633  438136 default_sa.go:55] duration metric: took 8.710698ms for default service account to be created ...
	I0510 19:14:44.871647  438136 system_pods.go:116] waiting for k8s-apps to be running ...
	I0510 19:14:44.876046  438136 system_pods.go:86] 6 kube-system pods found
	I0510 19:14:44.876093  438136 system_pods.go:89] "coredns-674b8bbfcf-2cc2n" [c1ecbdbb-8d9b-4ecf-a9a2-94d3478e1128] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0510 19:14:44.876114  438136 system_pods.go:89] "etcd-pause-317241" [139a211d-954b-48b1-9d06-04930cbae3ef] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0510 19:14:44.876127  438136 system_pods.go:89] "kube-apiserver-pause-317241" [6df56230-68a6-49e8-8fa0-9de8dcea547a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0510 19:14:44.876138  438136 system_pods.go:89] "kube-controller-manager-pause-317241" [90a731bf-4486-4c65-b1b0-b502df8db86f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0510 19:14:44.876145  438136 system_pods.go:89] "kube-proxy-skvbp" [08543e5b-1085-4de5-9922-16d2a027fb0e] Running
	I0510 19:14:44.876156  438136 system_pods.go:89] "kube-scheduler-pause-317241" [f0725bb2-7a49-4852-a9aa-8f03137243a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0510 19:14:44.876172  438136 system_pods.go:126] duration metric: took 4.517064ms to wait for k8s-apps to be running ...
	I0510 19:14:44.876188  438136 system_svc.go:44] waiting for kubelet service to be running ....
	I0510 19:14:44.876258  438136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0510 19:14:44.898086  438136 system_svc.go:56] duration metric: took 21.884327ms WaitForService to wait for kubelet
	I0510 19:14:44.898192  438136 kubeadm.go:578] duration metric: took 493.259009ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0510 19:14:44.898239  438136 node_conditions.go:102] verifying NodePressure condition ...
	I0510 19:14:44.901896  438136 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0510 19:14:44.901931  438136 node_conditions.go:123] node cpu capacity is 2
	I0510 19:14:44.901948  438136 node_conditions.go:105] duration metric: took 3.689961ms to run NodePressure ...
	I0510 19:14:44.901964  438136 start.go:241] waiting for startup goroutines ...
	I0510 19:14:44.901975  438136 start.go:246] waiting for cluster config update ...
	I0510 19:14:44.901985  438136 start.go:255] writing updated cluster config ...
	I0510 19:14:44.902353  438136 ssh_runner.go:195] Run: rm -f paused
	I0510 19:14:44.909805  438136 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0510 19:14:44.910403  438136 kapi.go:59] client config for pause-317241: &rest.Config{Host:"https://192.168.39.10:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20720-388787/.minikube/profiles/pause-317241/client.crt", KeyFile:"/home/jenkins/minikube-integration/20720-388787/.minikube/profiles/pause-317241/client.key", CAFile:"/home/jenkins/minikube-integration/20720-388787/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]
string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24b3a60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0510 19:14:44.917353  438136 pod_ready.go:83] waiting for pod "coredns-674b8bbfcf-2cc2n" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:14:46.925690  438136 pod_ready.go:94] pod "coredns-674b8bbfcf-2cc2n" is "Ready"
	I0510 19:14:46.925745  438136 pod_ready.go:86] duration metric: took 2.008365481s for pod "coredns-674b8bbfcf-2cc2n" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:14:46.929701  438136 pod_ready.go:83] waiting for pod "etcd-pause-317241" in "kube-system" namespace to be "Ready" or be gone ...
	W0510 19:14:48.936817  438136 pod_ready.go:104] pod "etcd-pause-317241" is not "Ready", error: <nil>
	I0510 19:14:49.454736  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:49.455266  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | unable to find current IP address of domain force-systemd-env-429136 in network mk-force-systemd-env-429136
	I0510 19:14:49.455331  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | I0510 19:14:49.455240  438695 retry.go:31] will retry after 4.989984529s: waiting for domain to come up
	I0510 19:14:48.250955  438986 preload.go:131] Checking if preload exists for k8s version v1.33.0 and runtime crio
	I0510 19:14:48.250996  438986 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-cri-o-overlay-amd64.tar.lz4
	I0510 19:14:48.251002  438986 cache.go:56] Caching tarball of preloaded images
	I0510 19:14:48.251079  438986 preload.go:172] Found /home/jenkins/minikube-integration/20720-388787/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0510 19:14:48.251085  438986 cache.go:59] Finished verifying existence of preloaded tar for v1.33.0 on crio
	I0510 19:14:48.251175  438986 profile.go:143] Saving config to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/cert-expiration-355262/config.json ...
	I0510 19:14:48.251187  438986 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/cert-expiration-355262/config.json: {Name:mkdd89f8ab0eb265ffaad36dbc023be1371ef075 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 19:14:48.251370  438986 start.go:360] acquireMachinesLock for cert-expiration-355262: {Name:mk11499d7756d503a7a24339ad1a7f9ab9dc0fab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	W0510 19:14:51.435421  438136 pod_ready.go:104] pod "etcd-pause-317241" is not "Ready", error: <nil>
	W0510 19:14:53.436710  438136 pod_ready.go:104] pod "etcd-pause-317241" is not "Ready", error: <nil>
	I0510 19:14:53.141686  435640 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0510 19:14:53.141890  435640 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:14:53.142148  435640 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:14:56.045727  438986 start.go:364] duration metric: took 7.794285786s to acquireMachinesLock for "cert-expiration-355262"
	I0510 19:14:56.045796  438986 start.go:93] Provisioning new machine with config: &{Name:cert-expiration-355262 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.33.0 ClusterName:cert-expiration-355262 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations
:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0510 19:14:56.045905  438986 start.go:125] createHost starting for "" (driver="kvm2")
	I0510 19:14:54.448458  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:54.449128  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has current primary IP address 192.168.50.10 and MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:54.449154  438479 main.go:141] libmachine: (force-systemd-env-429136) found domain IP: 192.168.50.10
	I0510 19:14:54.449168  438479 main.go:141] libmachine: (force-systemd-env-429136) reserving static IP address...
	I0510 19:14:54.449941  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | unable to find host DHCP lease matching {name: "force-systemd-env-429136", mac: "52:54:00:73:6a:05", ip: "192.168.50.10"} in network mk-force-systemd-env-429136
	I0510 19:14:54.544475  438479 main.go:141] libmachine: (force-systemd-env-429136) reserved static IP address 192.168.50.10 for domain force-systemd-env-429136
	I0510 19:14:54.544503  438479 main.go:141] libmachine: (force-systemd-env-429136) waiting for SSH...
	I0510 19:14:54.544543  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | Getting to WaitForSSH function...
	I0510 19:14:54.547653  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:54.548083  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6a:05", ip: ""} in network mk-force-systemd-env-429136: {Iface:virbr2 ExpiryTime:2025-05-10 20:14:49 +0000 UTC Type:0 Mac:52:54:00:73:6a:05 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:minikube Clientid:01:52:54:00:73:6a:05}
	I0510 19:14:54.548110  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined IP address 192.168.50.10 and MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:54.548244  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | Using SSH client type: external
	I0510 19:14:54.548299  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | Using SSH private key: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/force-systemd-env-429136/id_rsa (-rw-------)
	I0510 19:14:54.548357  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.10 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20720-388787/.minikube/machines/force-systemd-env-429136/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0510 19:14:54.548376  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | About to run SSH command:
	I0510 19:14:54.548387  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | exit 0
	I0510 19:14:54.676569  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | SSH cmd err, output: <nil>: 
	I0510 19:14:54.676872  438479 main.go:141] libmachine: (force-systemd-env-429136) KVM machine creation complete
	I0510 19:14:54.677308  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetConfigRaw
	I0510 19:14:54.678072  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .DriverName
	I0510 19:14:54.678302  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .DriverName
	I0510 19:14:54.678493  438479 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0510 19:14:54.678513  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetState
	I0510 19:14:54.680103  438479 main.go:141] libmachine: Detecting operating system of created instance...
	I0510 19:14:54.680121  438479 main.go:141] libmachine: Waiting for SSH to be available...
	I0510 19:14:54.680128  438479 main.go:141] libmachine: Getting to WaitForSSH function...
	I0510 19:14:54.680137  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHHostname
	I0510 19:14:54.683967  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:54.684415  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6a:05", ip: ""} in network mk-force-systemd-env-429136: {Iface:virbr2 ExpiryTime:2025-05-10 20:14:49 +0000 UTC Type:0 Mac:52:54:00:73:6a:05 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:force-systemd-env-429136 Clientid:01:52:54:00:73:6a:05}
	I0510 19:14:54.684446  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined IP address 192.168.50.10 and MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:54.684621  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHPort
	I0510 19:14:54.684818  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHKeyPath
	I0510 19:14:54.684993  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHKeyPath
	I0510 19:14:54.685173  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHUsername
	I0510 19:14:54.685348  438479 main.go:141] libmachine: Using SSH client type: native
	I0510 19:14:54.685756  438479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I0510 19:14:54.685780  438479 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0510 19:14:54.795384  438479 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0510 19:14:54.795411  438479 main.go:141] libmachine: Detecting the provisioner...
	I0510 19:14:54.795420  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHHostname
	I0510 19:14:54.798774  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:54.799299  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6a:05", ip: ""} in network mk-force-systemd-env-429136: {Iface:virbr2 ExpiryTime:2025-05-10 20:14:49 +0000 UTC Type:0 Mac:52:54:00:73:6a:05 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:force-systemd-env-429136 Clientid:01:52:54:00:73:6a:05}
	I0510 19:14:54.799338  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined IP address 192.168.50.10 and MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:54.799539  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHPort
	I0510 19:14:54.799846  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHKeyPath
	I0510 19:14:54.800158  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHKeyPath
	I0510 19:14:54.800412  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHUsername
	I0510 19:14:54.800675  438479 main.go:141] libmachine: Using SSH client type: native
	I0510 19:14:54.800998  438479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I0510 19:14:54.801014  438479 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0510 19:14:54.913550  438479 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2024.11.2-dirty
	ID=buildroot
	VERSION_ID=2024.11.2
	PRETTY_NAME="Buildroot 2024.11.2"
	
	I0510 19:14:54.913704  438479 main.go:141] libmachine: found compatible host: buildroot
	I0510 19:14:54.913715  438479 main.go:141] libmachine: Provisioning with buildroot...
	I0510 19:14:54.913723  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetMachineName
	I0510 19:14:54.914084  438479 buildroot.go:166] provisioning hostname "force-systemd-env-429136"
	I0510 19:14:54.914115  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetMachineName
	I0510 19:14:54.914289  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHHostname
	I0510 19:14:54.917302  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:54.917825  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6a:05", ip: ""} in network mk-force-systemd-env-429136: {Iface:virbr2 ExpiryTime:2025-05-10 20:14:49 +0000 UTC Type:0 Mac:52:54:00:73:6a:05 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:force-systemd-env-429136 Clientid:01:52:54:00:73:6a:05}
	I0510 19:14:54.917865  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined IP address 192.168.50.10 and MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:54.918094  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHPort
	I0510 19:14:54.918378  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHKeyPath
	I0510 19:14:54.918670  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHKeyPath
	I0510 19:14:54.918917  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHUsername
	I0510 19:14:54.919126  438479 main.go:141] libmachine: Using SSH client type: native
	I0510 19:14:54.919414  438479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I0510 19:14:54.919432  438479 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-429136 && echo "force-systemd-env-429136" | sudo tee /etc/hostname
	I0510 19:14:55.051705  438479 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-429136
	
	I0510 19:14:55.051735  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHHostname
	I0510 19:14:55.054980  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:55.055378  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6a:05", ip: ""} in network mk-force-systemd-env-429136: {Iface:virbr2 ExpiryTime:2025-05-10 20:14:49 +0000 UTC Type:0 Mac:52:54:00:73:6a:05 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:force-systemd-env-429136 Clientid:01:52:54:00:73:6a:05}
	I0510 19:14:55.055410  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined IP address 192.168.50.10 and MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:55.055671  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHPort
	I0510 19:14:55.055933  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHKeyPath
	I0510 19:14:55.056131  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHKeyPath
	I0510 19:14:55.056292  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHUsername
	I0510 19:14:55.056449  438479 main.go:141] libmachine: Using SSH client type: native
	I0510 19:14:55.056654  438479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I0510 19:14:55.056671  438479 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-429136' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-429136/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-429136' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0510 19:14:55.179121  438479 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0510 19:14:55.179165  438479 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20720-388787/.minikube CaCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20720-388787/.minikube}
	I0510 19:14:55.179189  438479 buildroot.go:174] setting up certificates
	I0510 19:14:55.179210  438479 provision.go:84] configureAuth start
	I0510 19:14:55.179224  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetMachineName
	I0510 19:14:55.179556  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetIP
	I0510 19:14:55.182772  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:55.183180  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6a:05", ip: ""} in network mk-force-systemd-env-429136: {Iface:virbr2 ExpiryTime:2025-05-10 20:14:49 +0000 UTC Type:0 Mac:52:54:00:73:6a:05 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:force-systemd-env-429136 Clientid:01:52:54:00:73:6a:05}
	I0510 19:14:55.183221  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined IP address 192.168.50.10 and MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:55.183361  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHHostname
	I0510 19:14:55.185826  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:55.186207  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6a:05", ip: ""} in network mk-force-systemd-env-429136: {Iface:virbr2 ExpiryTime:2025-05-10 20:14:49 +0000 UTC Type:0 Mac:52:54:00:73:6a:05 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:force-systemd-env-429136 Clientid:01:52:54:00:73:6a:05}
	I0510 19:14:55.186246  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined IP address 192.168.50.10 and MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:55.186452  438479 provision.go:143] copyHostCerts
	I0510 19:14:55.186492  438479 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem
	I0510 19:14:55.186546  438479 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem, removing ...
	I0510 19:14:55.186567  438479 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem
	I0510 19:14:55.186642  438479 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem (1078 bytes)
	I0510 19:14:55.186768  438479 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem
	I0510 19:14:55.186796  438479 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem, removing ...
	I0510 19:14:55.186804  438479 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem
	I0510 19:14:55.186840  438479 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem (1123 bytes)
	I0510 19:14:55.186924  438479 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem
	I0510 19:14:55.186949  438479 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem, removing ...
	I0510 19:14:55.186956  438479 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem
	I0510 19:14:55.186986  438479 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem (1675 bytes)
	I0510 19:14:55.187070  438479 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-429136 san=[127.0.0.1 192.168.50.10 force-systemd-env-429136 localhost minikube]
	I0510 19:14:55.317914  438479 provision.go:177] copyRemoteCerts
	I0510 19:14:55.318034  438479 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0510 19:14:55.318075  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHHostname
	I0510 19:14:55.322034  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:55.322556  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6a:05", ip: ""} in network mk-force-systemd-env-429136: {Iface:virbr2 ExpiryTime:2025-05-10 20:14:49 +0000 UTC Type:0 Mac:52:54:00:73:6a:05 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:force-systemd-env-429136 Clientid:01:52:54:00:73:6a:05}
	I0510 19:14:55.322626  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined IP address 192.168.50.10 and MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:55.322910  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHPort
	I0510 19:14:55.323167  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHKeyPath
	I0510 19:14:55.323414  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHUsername
	I0510 19:14:55.323628  438479 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/force-systemd-env-429136/id_rsa Username:docker}
	I0510 19:14:55.412415  438479 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0510 19:14:55.412500  438479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0510 19:14:55.448063  438479 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0510 19:14:55.448158  438479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0510 19:14:55.481600  438479 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0510 19:14:55.481696  438479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0510 19:14:55.513300  438479 provision.go:87] duration metric: took 334.074855ms to configureAuth
	I0510 19:14:55.513333  438479 buildroot.go:189] setting minikube options for container-runtime
	I0510 19:14:55.513511  438479 config.go:182] Loaded profile config "force-systemd-env-429136": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 19:14:55.513593  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHHostname
	I0510 19:14:55.516691  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:55.517048  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6a:05", ip: ""} in network mk-force-systemd-env-429136: {Iface:virbr2 ExpiryTime:2025-05-10 20:14:49 +0000 UTC Type:0 Mac:52:54:00:73:6a:05 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:force-systemd-env-429136 Clientid:01:52:54:00:73:6a:05}
	I0510 19:14:55.517097  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined IP address 192.168.50.10 and MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:55.517306  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHPort
	I0510 19:14:55.517508  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHKeyPath
	I0510 19:14:55.517664  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHKeyPath
	I0510 19:14:55.517818  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHUsername
	I0510 19:14:55.517991  438479 main.go:141] libmachine: Using SSH client type: native
	I0510 19:14:55.518225  438479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I0510 19:14:55.518249  438479 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0510 19:14:55.767149  438479 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0510 19:14:55.767186  438479 main.go:141] libmachine: Checking connection to Docker...
	I0510 19:14:55.767199  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetURL
	I0510 19:14:55.768932  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | using libvirt version 6000000
	I0510 19:14:55.771734  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:55.772197  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6a:05", ip: ""} in network mk-force-systemd-env-429136: {Iface:virbr2 ExpiryTime:2025-05-10 20:14:49 +0000 UTC Type:0 Mac:52:54:00:73:6a:05 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:force-systemd-env-429136 Clientid:01:52:54:00:73:6a:05}
	I0510 19:14:55.772232  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined IP address 192.168.50.10 and MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:55.772439  438479 main.go:141] libmachine: Docker is up and running!
	I0510 19:14:55.772460  438479 main.go:141] libmachine: Reticulating splines...
	I0510 19:14:55.772471  438479 client.go:171] duration metric: took 23.964486496s to LocalClient.Create
	I0510 19:14:55.772503  438479 start.go:167] duration metric: took 23.964562021s to libmachine.API.Create "force-systemd-env-429136"
	I0510 19:14:55.772513  438479 start.go:293] postStartSetup for "force-systemd-env-429136" (driver="kvm2")
	I0510 19:14:55.772526  438479 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0510 19:14:55.772564  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .DriverName
	I0510 19:14:55.772895  438479 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0510 19:14:55.772946  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHHostname
	I0510 19:14:55.775991  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:55.776373  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6a:05", ip: ""} in network mk-force-systemd-env-429136: {Iface:virbr2 ExpiryTime:2025-05-10 20:14:49 +0000 UTC Type:0 Mac:52:54:00:73:6a:05 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:force-systemd-env-429136 Clientid:01:52:54:00:73:6a:05}
	I0510 19:14:55.776405  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined IP address 192.168.50.10 and MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:55.776568  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHPort
	I0510 19:14:55.776770  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHKeyPath
	I0510 19:14:55.776967  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHUsername
	I0510 19:14:55.777125  438479 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/force-systemd-env-429136/id_rsa Username:docker}
	I0510 19:14:55.868379  438479 ssh_runner.go:195] Run: cat /etc/os-release
	I0510 19:14:55.873883  438479 info.go:137] Remote host: Buildroot 2024.11.2
	I0510 19:14:55.873924  438479 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-388787/.minikube/addons for local assets ...
	I0510 19:14:55.874036  438479 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-388787/.minikube/files for local assets ...
	I0510 19:14:55.874126  438479 filesync.go:149] local asset: /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem -> 3959802.pem in /etc/ssl/certs
	I0510 19:14:55.874137  438479 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem -> /etc/ssl/certs/3959802.pem
	I0510 19:14:55.874222  438479 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0510 19:14:55.887182  438479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem --> /etc/ssl/certs/3959802.pem (1708 bytes)
	I0510 19:14:55.918158  438479 start.go:296] duration metric: took 145.626627ms for postStartSetup
	I0510 19:14:55.918225  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetConfigRaw
	I0510 19:14:55.918985  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetIP
	I0510 19:14:55.922193  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:55.922631  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6a:05", ip: ""} in network mk-force-systemd-env-429136: {Iface:virbr2 ExpiryTime:2025-05-10 20:14:49 +0000 UTC Type:0 Mac:52:54:00:73:6a:05 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:force-systemd-env-429136 Clientid:01:52:54:00:73:6a:05}
	I0510 19:14:55.922667  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined IP address 192.168.50.10 and MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:55.922948  438479 profile.go:143] Saving config to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/force-systemd-env-429136/config.json ...
	I0510 19:14:55.923257  438479 start.go:128] duration metric: took 24.137801932s to createHost
	I0510 19:14:55.923296  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHHostname
	I0510 19:14:55.926652  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:55.927153  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6a:05", ip: ""} in network mk-force-systemd-env-429136: {Iface:virbr2 ExpiryTime:2025-05-10 20:14:49 +0000 UTC Type:0 Mac:52:54:00:73:6a:05 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:force-systemd-env-429136 Clientid:01:52:54:00:73:6a:05}
	I0510 19:14:55.927184  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined IP address 192.168.50.10 and MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:55.927407  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHPort
	I0510 19:14:55.927644  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHKeyPath
	I0510 19:14:55.927845  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHKeyPath
	I0510 19:14:55.928064  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHUsername
	I0510 19:14:55.928293  438479 main.go:141] libmachine: Using SSH client type: native
	I0510 19:14:55.928577  438479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I0510 19:14:55.928603  438479 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0510 19:14:56.045470  438479 main.go:141] libmachine: SSH cmd err, output: <nil>: 1746904496.013750820
	
	I0510 19:14:56.045501  438479 fix.go:216] guest clock: 1746904496.013750820
	I0510 19:14:56.045510  438479 fix.go:229] Guest: 2025-05-10 19:14:56.01375082 +0000 UTC Remote: 2025-05-10 19:14:55.923274706 +0000 UTC m=+53.421171925 (delta=90.476114ms)
	I0510 19:14:56.045539  438479 fix.go:200] guest clock delta is within tolerance: 90.476114ms
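	For reference, the delta reported above is simply guest minus host: 1746904496.013750820 - 1746904495.923274706 = 0.090476114 s, i.e. the 90.476114ms shown, well inside the tolerance the driver checks against.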
	I0510 19:14:56.045547  438479 start.go:83] releasing machines lock for "force-systemd-env-429136", held for 24.260308883s
	I0510 19:14:56.045584  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .DriverName
	I0510 19:14:56.045969  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetIP
	I0510 19:14:56.049598  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:56.050169  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6a:05", ip: ""} in network mk-force-systemd-env-429136: {Iface:virbr2 ExpiryTime:2025-05-10 20:14:49 +0000 UTC Type:0 Mac:52:54:00:73:6a:05 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:force-systemd-env-429136 Clientid:01:52:54:00:73:6a:05}
	I0510 19:14:56.050219  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined IP address 192.168.50.10 and MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:56.050456  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .DriverName
	I0510 19:14:56.051127  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .DriverName
	I0510 19:14:56.051388  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .DriverName
	I0510 19:14:56.051510  438479 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0510 19:14:56.051561  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHHostname
	I0510 19:14:56.051718  438479 ssh_runner.go:195] Run: cat /version.json
	I0510 19:14:56.051754  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHHostname
	I0510 19:14:56.055032  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:56.055179  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:56.055471  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6a:05", ip: ""} in network mk-force-systemd-env-429136: {Iface:virbr2 ExpiryTime:2025-05-10 20:14:49 +0000 UTC Type:0 Mac:52:54:00:73:6a:05 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:force-systemd-env-429136 Clientid:01:52:54:00:73:6a:05}
	I0510 19:14:56.055502  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined IP address 192.168.50.10 and MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:56.055640  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHPort
	I0510 19:14:56.055671  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6a:05", ip: ""} in network mk-force-systemd-env-429136: {Iface:virbr2 ExpiryTime:2025-05-10 20:14:49 +0000 UTC Type:0 Mac:52:54:00:73:6a:05 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:force-systemd-env-429136 Clientid:01:52:54:00:73:6a:05}
	I0510 19:14:56.055703  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined IP address 192.168.50.10 and MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:56.055858  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHKeyPath
	I0510 19:14:56.055917  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHPort
	I0510 19:14:56.056034  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHUsername
	I0510 19:14:56.056089  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHKeyPath
	I0510 19:14:56.056187  438479 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/force-systemd-env-429136/id_rsa Username:docker}
	I0510 19:14:56.056245  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHUsername
	I0510 19:14:56.056350  438479 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/force-systemd-env-429136/id_rsa Username:docker}
	I0510 19:14:56.182883  438479 ssh_runner.go:195] Run: systemctl --version
	I0510 19:14:56.190481  438479 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0510 19:14:56.365502  438479 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0510 19:14:56.373679  438479 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0510 19:14:56.373798  438479 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0510 19:14:56.401720  438479 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0510 19:14:56.401757  438479 start.go:495] detecting cgroup driver to use...
	I0510 19:14:56.401782  438479 start.go:499] using "systemd" cgroup driver as enforced via flags
	I0510 19:14:56.401852  438479 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0510 19:14:56.421911  438479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0510 19:14:56.448206  438479 docker.go:225] disabling cri-docker service (if available) ...
	I0510 19:14:56.448309  438479 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0510 19:14:56.467186  438479 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0510 19:14:56.485155  438479 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0510 19:14:56.648898  438479 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0510 19:14:56.804830  438479 docker.go:241] disabling docker service ...
	I0510 19:14:56.804910  438479 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0510 19:14:56.828360  438479 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0510 19:14:56.846653  438479 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0510 19:14:57.057005  438479 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0510 19:14:57.223616  438479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0510 19:14:57.242019  438479 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0510 19:14:57.266512  438479 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0510 19:14:57.266610  438479 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:14:57.280442  438479 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0510 19:14:57.280556  438479 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:14:57.294568  438479 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:14:57.309110  438479 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:14:57.323359  438479 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0510 19:14:57.340438  438479 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:14:57.358828  438479 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:14:57.383071  438479 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:14:57.396699  438479 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0510 19:14:57.410766  438479 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0510 19:14:57.410854  438479 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0510 19:14:57.429895  438479 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0510 19:14:57.443195  438479 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 19:14:57.595025  438479 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0510 19:14:57.729093  438479 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0510 19:14:57.729180  438479 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0510 19:14:57.735100  438479 start.go:563] Will wait 60s for crictl version
	I0510 19:14:57.735176  438479 ssh_runner.go:195] Run: which crictl
	I0510 19:14:57.740583  438479 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0510 19:14:57.787499  438479 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0510 19:14:57.787616  438479 ssh_runner.go:195] Run: crio --version
	I0510 19:14:57.819784  438479 ssh_runner.go:195] Run: crio --version
	I0510 19:14:57.856202  438479 out.go:177] * Preparing Kubernetes v1.33.0 on CRI-O 1.29.1 ...
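	The sed edits at 19:14:57 above converge on a small CRI-O drop-in: the pause image, the systemd cgroup manager, conmon in the "pod" cgroup, and unprivileged low ports re-enabled. As a hedged sketch only (the section names follow stock CRI-O packaging and the file name 99-minikube-sketch.conf is invented for illustration; neither is dumped by this run), the same end state could be written in one shot as:
	
	# Sketch: one-shot equivalent of the incremental sed edits above, then restart CRI-O.
	sudo tee /etc/crio/crio.conf.d/99-minikube-sketch.conf <<-'EOF'
		[crio.image]
		pause_image = "registry.k8s.io/pause:3.10"
		[crio.runtime]
		cgroup_manager = "systemd"
		conmon_cgroup = "pod"
		default_sysctls = [
			"net.ipv4.ip_unprivileged_port_start=0",
		]
	EOF
	sudo systemctl restart crio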
	I0510 19:14:56.048050  438986 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0510 19:14:56.048336  438986 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:14:56.048415  438986 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:14:56.066612  438986 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37639
	I0510 19:14:56.067368  438986 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:14:56.068238  438986 main.go:141] libmachine: Using API Version  1
	I0510 19:14:56.068285  438986 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:14:56.068783  438986 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:14:56.069031  438986 main.go:141] libmachine: (cert-expiration-355262) Calling .GetMachineName
	I0510 19:14:56.069222  438986 main.go:141] libmachine: (cert-expiration-355262) Calling .DriverName
	I0510 19:14:56.069396  438986 start.go:159] libmachine.API.Create for "cert-expiration-355262" (driver="kvm2")
	I0510 19:14:56.069428  438986 client.go:168] LocalClient.Create starting
	I0510 19:14:56.069459  438986 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem
	I0510 19:14:56.069507  438986 main.go:141] libmachine: Decoding PEM data...
	I0510 19:14:56.069531  438986 main.go:141] libmachine: Parsing certificate...
	I0510 19:14:56.069608  438986 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem
	I0510 19:14:56.069632  438986 main.go:141] libmachine: Decoding PEM data...
	I0510 19:14:56.069647  438986 main.go:141] libmachine: Parsing certificate...
	I0510 19:14:56.069710  438986 main.go:141] libmachine: Running pre-create checks...
	I0510 19:14:56.069720  438986 main.go:141] libmachine: (cert-expiration-355262) Calling .PreCreateCheck
	I0510 19:14:56.070181  438986 main.go:141] libmachine: (cert-expiration-355262) Calling .GetConfigRaw
	I0510 19:14:56.070735  438986 main.go:141] libmachine: Creating machine...
	I0510 19:14:56.070742  438986 main.go:141] libmachine: (cert-expiration-355262) Calling .Create
	I0510 19:14:56.070958  438986 main.go:141] libmachine: (cert-expiration-355262) creating KVM machine...
	I0510 19:14:56.070967  438986 main.go:141] libmachine: (cert-expiration-355262) creating network...
	I0510 19:14:56.072499  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | found existing default KVM network
	I0510 19:14:56.073524  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | I0510 19:14:56.073336  439059 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:91:b8:05} reservation:<nil>}
	I0510 19:14:56.074500  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | I0510 19:14:56.074373  439059 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:a8:ac:69} reservation:<nil>}
	I0510 19:14:56.075710  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | I0510 19:14:56.075575  439059 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003343d0}
	I0510 19:14:56.075720  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | created network xml: 
	I0510 19:14:56.075728  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | <network>
	I0510 19:14:56.075740  438986 main.go:141] libmachine: (cert-expiration-355262) DBG |   <name>mk-cert-expiration-355262</name>
	I0510 19:14:56.075745  438986 main.go:141] libmachine: (cert-expiration-355262) DBG |   <dns enable='no'/>
	I0510 19:14:56.075748  438986 main.go:141] libmachine: (cert-expiration-355262) DBG |   
	I0510 19:14:56.075754  438986 main.go:141] libmachine: (cert-expiration-355262) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0510 19:14:56.075758  438986 main.go:141] libmachine: (cert-expiration-355262) DBG |     <dhcp>
	I0510 19:14:56.075763  438986 main.go:141] libmachine: (cert-expiration-355262) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0510 19:14:56.075774  438986 main.go:141] libmachine: (cert-expiration-355262) DBG |     </dhcp>
	I0510 19:14:56.075778  438986 main.go:141] libmachine: (cert-expiration-355262) DBG |   </ip>
	I0510 19:14:56.075831  438986 main.go:141] libmachine: (cert-expiration-355262) DBG |   
	I0510 19:14:56.075865  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | </network>
	I0510 19:14:56.075887  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | 
	I0510 19:14:56.081627  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | trying to create private KVM network mk-cert-expiration-355262 192.168.61.0/24...
	I0510 19:14:56.172296  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | private KVM network mk-cert-expiration-355262 192.168.61.0/24 created
	I0510 19:14:56.172322  438986 main.go:141] libmachine: (cert-expiration-355262) setting up store path in /home/jenkins/minikube-integration/20720-388787/.minikube/machines/cert-expiration-355262 ...
	I0510 19:14:56.172335  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | I0510 19:14:56.172270  439059 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20720-388787/.minikube
	I0510 19:14:56.172345  438986 main.go:141] libmachine: (cert-expiration-355262) building disk image from file:///home/jenkins/minikube-integration/20720-388787/.minikube/cache/iso/amd64/minikube-v1.35.0-1746739450-20720-amd64.iso
	I0510 19:14:56.172483  438986 main.go:141] libmachine: (cert-expiration-355262) Downloading /home/jenkins/minikube-integration/20720-388787/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20720-388787/.minikube/cache/iso/amd64/minikube-v1.35.0-1746739450-20720-amd64.iso...
	I0510 19:14:56.486960  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | I0510 19:14:56.486815  439059 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/cert-expiration-355262/id_rsa...
	I0510 19:14:56.526378  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | I0510 19:14:56.526175  439059 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/cert-expiration-355262/cert-expiration-355262.rawdisk...
	I0510 19:14:56.526407  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | Writing magic tar header
	I0510 19:14:56.526426  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | Writing SSH key tar header
	I0510 19:14:56.526442  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | I0510 19:14:56.526309  439059 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20720-388787/.minikube/machines/cert-expiration-355262 ...
	I0510 19:14:56.526456  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/cert-expiration-355262
	I0510 19:14:56.526468  438986 main.go:141] libmachine: (cert-expiration-355262) setting executable bit set on /home/jenkins/minikube-integration/20720-388787/.minikube/machines/cert-expiration-355262 (perms=drwx------)
	I0510 19:14:56.526477  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20720-388787/.minikube/machines
	I0510 19:14:56.526497  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20720-388787/.minikube
	I0510 19:14:56.526505  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20720-388787
	I0510 19:14:56.526516  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0510 19:14:56.526523  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | checking permissions on dir: /home/jenkins
	I0510 19:14:56.526531  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | checking permissions on dir: /home
	I0510 19:14:56.526537  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | skipping /home - not owner
	I0510 19:14:56.526547  438986 main.go:141] libmachine: (cert-expiration-355262) setting executable bit set on /home/jenkins/minikube-integration/20720-388787/.minikube/machines (perms=drwxr-xr-x)
	I0510 19:14:56.526558  438986 main.go:141] libmachine: (cert-expiration-355262) setting executable bit set on /home/jenkins/minikube-integration/20720-388787/.minikube (perms=drwxr-xr-x)
	I0510 19:14:56.526566  438986 main.go:141] libmachine: (cert-expiration-355262) setting executable bit set on /home/jenkins/minikube-integration/20720-388787 (perms=drwxrwxr-x)
	I0510 19:14:56.526597  438986 main.go:141] libmachine: (cert-expiration-355262) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0510 19:14:56.526607  438986 main.go:141] libmachine: (cert-expiration-355262) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0510 19:14:56.526615  438986 main.go:141] libmachine: (cert-expiration-355262) creating domain...
	I0510 19:14:56.528079  438986 main.go:141] libmachine: (cert-expiration-355262) define libvirt domain using xml: 
	I0510 19:14:56.528088  438986 main.go:141] libmachine: (cert-expiration-355262) <domain type='kvm'>
	I0510 19:14:56.528093  438986 main.go:141] libmachine: (cert-expiration-355262)   <name>cert-expiration-355262</name>
	I0510 19:14:56.528099  438986 main.go:141] libmachine: (cert-expiration-355262)   <memory unit='MiB'>2048</memory>
	I0510 19:14:56.528106  438986 main.go:141] libmachine: (cert-expiration-355262)   <vcpu>2</vcpu>
	I0510 19:14:56.528110  438986 main.go:141] libmachine: (cert-expiration-355262)   <features>
	I0510 19:14:56.528120  438986 main.go:141] libmachine: (cert-expiration-355262)     <acpi/>
	I0510 19:14:56.528125  438986 main.go:141] libmachine: (cert-expiration-355262)     <apic/>
	I0510 19:14:56.528132  438986 main.go:141] libmachine: (cert-expiration-355262)     <pae/>
	I0510 19:14:56.528137  438986 main.go:141] libmachine: (cert-expiration-355262)     
	I0510 19:14:56.528143  438986 main.go:141] libmachine: (cert-expiration-355262)   </features>
	I0510 19:14:56.528147  438986 main.go:141] libmachine: (cert-expiration-355262)   <cpu mode='host-passthrough'>
	I0510 19:14:56.528151  438986 main.go:141] libmachine: (cert-expiration-355262)   
	I0510 19:14:56.528154  438986 main.go:141] libmachine: (cert-expiration-355262)   </cpu>
	I0510 19:14:56.528157  438986 main.go:141] libmachine: (cert-expiration-355262)   <os>
	I0510 19:14:56.528166  438986 main.go:141] libmachine: (cert-expiration-355262)     <type>hvm</type>
	I0510 19:14:56.528170  438986 main.go:141] libmachine: (cert-expiration-355262)     <boot dev='cdrom'/>
	I0510 19:14:56.528173  438986 main.go:141] libmachine: (cert-expiration-355262)     <boot dev='hd'/>
	I0510 19:14:56.528178  438986 main.go:141] libmachine: (cert-expiration-355262)     <bootmenu enable='no'/>
	I0510 19:14:56.528181  438986 main.go:141] libmachine: (cert-expiration-355262)   </os>
	I0510 19:14:56.528184  438986 main.go:141] libmachine: (cert-expiration-355262)   <devices>
	I0510 19:14:56.528188  438986 main.go:141] libmachine: (cert-expiration-355262)     <disk type='file' device='cdrom'>
	I0510 19:14:56.528195  438986 main.go:141] libmachine: (cert-expiration-355262)       <source file='/home/jenkins/minikube-integration/20720-388787/.minikube/machines/cert-expiration-355262/boot2docker.iso'/>
	I0510 19:14:56.528205  438986 main.go:141] libmachine: (cert-expiration-355262)       <target dev='hdc' bus='scsi'/>
	I0510 19:14:56.528209  438986 main.go:141] libmachine: (cert-expiration-355262)       <readonly/>
	I0510 19:14:56.528212  438986 main.go:141] libmachine: (cert-expiration-355262)     </disk>
	I0510 19:14:56.528230  438986 main.go:141] libmachine: (cert-expiration-355262)     <disk type='file' device='disk'>
	I0510 19:14:56.528234  438986 main.go:141] libmachine: (cert-expiration-355262)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0510 19:14:56.528242  438986 main.go:141] libmachine: (cert-expiration-355262)       <source file='/home/jenkins/minikube-integration/20720-388787/.minikube/machines/cert-expiration-355262/cert-expiration-355262.rawdisk'/>
	I0510 19:14:56.528245  438986 main.go:141] libmachine: (cert-expiration-355262)       <target dev='hda' bus='virtio'/>
	I0510 19:14:56.528249  438986 main.go:141] libmachine: (cert-expiration-355262)     </disk>
	I0510 19:14:56.528252  438986 main.go:141] libmachine: (cert-expiration-355262)     <interface type='network'>
	I0510 19:14:56.528256  438986 main.go:141] libmachine: (cert-expiration-355262)       <source network='mk-cert-expiration-355262'/>
	I0510 19:14:56.528260  438986 main.go:141] libmachine: (cert-expiration-355262)       <model type='virtio'/>
	I0510 19:14:56.528264  438986 main.go:141] libmachine: (cert-expiration-355262)     </interface>
	I0510 19:14:56.528267  438986 main.go:141] libmachine: (cert-expiration-355262)     <interface type='network'>
	I0510 19:14:56.528273  438986 main.go:141] libmachine: (cert-expiration-355262)       <source network='default'/>
	I0510 19:14:56.528279  438986 main.go:141] libmachine: (cert-expiration-355262)       <model type='virtio'/>
	I0510 19:14:56.528286  438986 main.go:141] libmachine: (cert-expiration-355262)     </interface>
	I0510 19:14:56.528291  438986 main.go:141] libmachine: (cert-expiration-355262)     <serial type='pty'>
	I0510 19:14:56.528298  438986 main.go:141] libmachine: (cert-expiration-355262)       <target port='0'/>
	I0510 19:14:56.528303  438986 main.go:141] libmachine: (cert-expiration-355262)     </serial>
	I0510 19:14:56.528309  438986 main.go:141] libmachine: (cert-expiration-355262)     <console type='pty'>
	I0510 19:14:56.528314  438986 main.go:141] libmachine: (cert-expiration-355262)       <target type='serial' port='0'/>
	I0510 19:14:56.528321  438986 main.go:141] libmachine: (cert-expiration-355262)     </console>
	I0510 19:14:56.528332  438986 main.go:141] libmachine: (cert-expiration-355262)     <rng model='virtio'>
	I0510 19:14:56.528341  438986 main.go:141] libmachine: (cert-expiration-355262)       <backend model='random'>/dev/random</backend>
	I0510 19:14:56.528347  438986 main.go:141] libmachine: (cert-expiration-355262)     </rng>
	I0510 19:14:56.528353  438986 main.go:141] libmachine: (cert-expiration-355262)     
	I0510 19:14:56.528357  438986 main.go:141] libmachine: (cert-expiration-355262)     
	I0510 19:14:56.528363  438986 main.go:141] libmachine: (cert-expiration-355262)   </devices>
	I0510 19:14:56.528367  438986 main.go:141] libmachine: (cert-expiration-355262) </domain>
	I0510 19:14:56.528378  438986 main.go:141] libmachine: (cert-expiration-355262) 
	I0510 19:14:56.533544  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | domain cert-expiration-355262 has defined MAC address 52:54:00:fd:5f:1d in network default
	I0510 19:14:56.534245  438986 main.go:141] libmachine: (cert-expiration-355262) starting domain...
	I0510 19:14:56.534268  438986 main.go:141] libmachine: (cert-expiration-355262) ensuring networks are active...
	I0510 19:14:56.534278  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | domain cert-expiration-355262 has defined MAC address 52:54:00:dd:9e:3a in network mk-cert-expiration-355262
	I0510 19:14:56.535023  438986 main.go:141] libmachine: (cert-expiration-355262) Ensuring network default is active
	I0510 19:14:56.535319  438986 main.go:141] libmachine: (cert-expiration-355262) Ensuring network mk-cert-expiration-355262 is active
	I0510 19:14:56.535875  438986 main.go:141] libmachine: (cert-expiration-355262) getting domain XML...
	I0510 19:14:56.536834  438986 main.go:141] libmachine: (cert-expiration-355262) creating domain...
	I0510 19:14:57.872647  438986 main.go:141] libmachine: (cert-expiration-355262) waiting for IP...
	I0510 19:14:57.873429  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | domain cert-expiration-355262 has defined MAC address 52:54:00:dd:9e:3a in network mk-cert-expiration-355262
	I0510 19:14:57.874139  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | unable to find current IP address of domain cert-expiration-355262 in network mk-cert-expiration-355262
	I0510 19:14:57.874277  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | I0510 19:14:57.874151  439059 retry.go:31] will retry after 219.598489ms: waiting for domain to come up
	I0510 19:14:58.096150  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | domain cert-expiration-355262 has defined MAC address 52:54:00:dd:9e:3a in network mk-cert-expiration-355262
	I0510 19:14:58.096683  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | unable to find current IP address of domain cert-expiration-355262 in network mk-cert-expiration-355262
	I0510 19:14:58.096747  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | I0510 19:14:58.096643  439059 retry.go:31] will retry after 376.155606ms: waiting for domain to come up
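	The "waiting for IP" retries above poll libvirt's DHCP leases for the newly defined domain. A hedged way to inspect the same data by hand on the host (illustrative only; virsh access is assumed and is not part of the test run):
	
	# Leases the kvm2 driver matches against (cf. the "found host DHCP lease" lines earlier)
	virsh net-dhcp-leases mk-cert-expiration-355262
	# The generated network and domain definitions
	virsh net-dumpxml mk-cert-expiration-355262
	virsh dumpxml cert-expiration-355262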
	W0510 19:14:55.437513  438136 pod_ready.go:104] pod "etcd-pause-317241" is not "Ready", error: <nil>
	I0510 19:14:56.941142  438136 pod_ready.go:94] pod "etcd-pause-317241" is "Ready"
	I0510 19:14:56.941176  438136 pod_ready.go:86] duration metric: took 10.011442056s for pod "etcd-pause-317241" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:14:56.943699  438136 pod_ready.go:83] waiting for pod "kube-apiserver-pause-317241" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:14:58.456585  438136 pod_ready.go:94] pod "kube-apiserver-pause-317241" is "Ready"
	I0510 19:14:58.456639  438136 pod_ready.go:86] duration metric: took 1.512907922s for pod "kube-apiserver-pause-317241" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:14:58.461913  438136 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-317241" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:14:58.470389  438136 pod_ready.go:94] pod "kube-controller-manager-pause-317241" is "Ready"
	I0510 19:14:58.470422  438136 pod_ready.go:86] duration metric: took 8.476599ms for pod "kube-controller-manager-pause-317241" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:14:58.474274  438136 pod_ready.go:83] waiting for pod "kube-proxy-skvbp" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:14:58.482439  438136 pod_ready.go:94] pod "kube-proxy-skvbp" is "Ready"
	I0510 19:14:58.482483  438136 pod_ready.go:86] duration metric: took 8.176534ms for pod "kube-proxy-skvbp" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:14:58.535212  438136 pod_ready.go:83] waiting for pod "kube-scheduler-pause-317241" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:14:58.934588  438136 pod_ready.go:94] pod "kube-scheduler-pause-317241" is "Ready"
	I0510 19:14:58.934617  438136 pod_ready.go:86] duration metric: took 399.338323ms for pod "kube-scheduler-pause-317241" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:14:58.934628  438136 pod_ready.go:40] duration metric: took 14.024775546s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0510 19:14:58.990570  438136 start.go:607] kubectl: 1.33.0, cluster: 1.33.0 (minor skew: 0)
	I0510 19:14:58.992949  438136 out.go:177] * Done! kubectl is now configured to use "pause-317241" cluster and "default" namespace by default
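	The pod_ready.go waits above amount to readiness checks on the labelled control-plane pods; a hedged kubectl equivalent against the same cluster (illustrative only, not how minikube implements the check) would be:
	
	kubectl --context pause-317241 -n kube-system wait --for=condition=Ready \
	  pod -l component=etcd --timeout=90s
	kubectl --context pause-317241 -n kube-system get pods \
	  -l 'component in (etcd,kube-apiserver,kube-controller-manager,kube-scheduler)'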
	
	
	==> CRI-O <==
	May 10 19:14:59 pause-317241 crio[3017]: time="2025-05-10 19:14:59.956612262Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1c563a89-a571-442f-844c-5b6e976f9592 name=/runtime.v1.RuntimeService/Version
	May 10 19:14:59 pause-317241 crio[3017]: time="2025-05-10 19:14:59.958234487Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=209dc5dd-86b8-4759-ba09-e8b468fd7c07 name=/runtime.v1.ImageService/ImageFsInfo
	May 10 19:14:59 pause-317241 crio[3017]: time="2025-05-10 19:14:59.958941810Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746904499958662039,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125819,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=209dc5dd-86b8-4759-ba09-e8b468fd7c07 name=/runtime.v1.ImageService/ImageFsInfo
	May 10 19:14:59 pause-317241 crio[3017]: time="2025-05-10 19:14:59.959928850Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=98722e56-146f-46b9-8400-c0e236b14834 name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:14:59 pause-317241 crio[3017]: time="2025-05-10 19:14:59.960004712Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=98722e56-146f-46b9-8400-c0e236b14834 name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:14:59 pause-317241 crio[3017]: time="2025-05-10 19:14:59.960332446Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:34e6c248131a5c5fe1f7df747e0077aab1986c049aba07113268029aa19ef292,PodSandboxId:fff448915d6c225f768a88c4107b0c411b288f3400557477da15aa1eef0285db,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1746904484029414801,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-2cc2n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1ecbdbb-8d9b-4ecf-a9a2-94d3478e1128,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2014853cb47d41b0658ce88cece8340e54d300cc95cc1ee4c8b1c6164a3e0fd4,PodSandboxId:5e10d365a839c16e827ee6151e426b67c48efefa36ada8ccdd191eedeec26997,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,State:CONTAINER_RUNNING,CreatedAt:1746904483425536227,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-skvbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 08543e5b-1085-4de5-9922-16d2a027fb0e,},Annotations:map[string]string{io.kubernetes.container.hash: 2406bd3f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3df19d449ad869e6c3b02da7edbd6ffb58d12f2d727f816ea15867bd4aa08d16,PodSandboxId:a39b3edf7ec64a2e10ac544b96954dd9ddcd04d24f72b9cddccfdee6ecf71de7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,State:CONTAINER_RUNNING,CreatedAt:1746904478790268218,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-317241,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21bc14276c
d5381e92e6d9f4fa417bb5,},Annotations:map[string]string{io.kubernetes.container.hash: fd54b99d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b2b5024e1df577f880cf4796775a8b53c2f1235f0b94dfa501a7e713354a4dc,PodSandboxId:3800a382e0bce1f8041ef956ce5cf003fc7b3dd72c393bf2815edc04d3bb0fe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,State:CONTAINER_RUNNING,CreatedAt:1746904478510784300,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-317241,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
bd3407a5515906aae2ca3170d960a3a,},Annotations:map[string]string{io.kubernetes.container.hash: 20846f37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d201484550c495e4de0fd8ba3f315ff5ee277b1f86a0dccd9c456e1bbc901089,PodSandboxId:90fc3520c3595d665025a9ed61fba9da3eafa398cec8e92f62e71365e516d7e2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1746904478450688392,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-317241,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a93f06e9953cac36959843399c2f269,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ac494014a96938db7a564fb5551c332727ec3747c4cbeadb8f0171a5dfbf786,PodSandboxId:b2b7064d6a453b2b80292646fc74a1172056293617f21148146f0c58df8aaa70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,State:CONTAINER_RUNNING,CreatedAt:1746904478424395512,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-317241,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0714c860dd7b384d9ba10850530c253,},Annotations:map[string]string{io.
kubernetes.container.hash: 2e2dc675,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60d3f14ff709f7784986c33fb2487c2f5a652445cbba518e696441f701e452fb,PodSandboxId:856bd62f83b4aa95e06e560c66768f23474ea1a63edd1a9e38cccbb4abed762f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_EXITED,CreatedAt:1746904382352867917,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-2cc2n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1ecbdbb-8d9b-4ecf-a9a2-94d3478e1128,},Annotations:map[string]string{io.kubernetes.container.hash: eafd0
92d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9945e09f6b6d261039029f89e168d45dd9fc4acf65b417b637a7704d3cc6df5,PodSandboxId:9b2c1f7aa1ba54b4aeb16b05ffcd2872a4d7af4e11f7ae21d5377762d1f6735a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,State:CONTAINER_EXITED,CreatedAt:1746904381974441911,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-skvbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08543e5b-1085-4de5-9922-16d2a027fb0e,},Annotations:map[string]string{io.kubernetes.container.hash: 2406bd3f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b96ba0c6867681eb5c3dd0df167dc56dd09ffcb675f8fa26472566e54feb7385,PodSandboxId:be700641b4f491936267c67677c6d291d70dd5bbb8ecdd6364b8f62f336bc473,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,State:CONTAINER_EXITED,CreatedAt:1746904368930343150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-paus
e-317241,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21bc14276cd5381e92e6d9f4fa417bb5,},Annotations:map[string]string{io.kubernetes.container.hash: fd54b99d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6989a7e3ea042c054e6f979c8042e6a4f7c82fab32f4778857b936239f6db91c,PodSandboxId:e7cc467ddfff88dbdf134a7d835cc4352ab0454ce0419546d656884e172cf011,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_EXITED,CreatedAt:1746904368903169884,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-317241,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: 1a93f06e9953cac36959843399c2f269,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b84f77943081f73cd80a1376987cceac5bbcb6932aaab74ffc59f9400d903650,PodSandboxId:afbad45567ddd6cd513797e628303d6d56e830f7a22f41bf1e684135608a128a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,State:CONTAINER_EXITED,CreatedAt:1746904368821503920,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-317241,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: a0714c860dd7b384d9ba10850530c253,},Annotations:map[string]string{io.kubernetes.container.hash: 2e2dc675,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ac36579b810dd23a78153742e60a40498c4c2744c1c1b600d92974993419a57,PodSandboxId:8c48f0b1b59eb870657c77354fe231a0f6694e2aa715e77ecab6ba4083920287,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,State:CONTAINER_EXITED,CreatedAt:1746904368701215523,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-317241,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 2bd3407a5515906aae2ca3170d960a3a,},Annotations:map[string]string{io.kubernetes.container.hash: 20846f37,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=98722e56-146f-46b9-8400-c0e236b14834 name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:14:59 pause-317241 crio[3017]: time="2025-05-10 19:14:59.988999705Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=145c3bdd-b954-45ec-b5a7-e91f2fd32472 name=/runtime.v1.RuntimeService/ListPodSandbox
	May 10 19:14:59 pause-317241 crio[3017]: time="2025-05-10 19:14:59.989215800Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:fff448915d6c225f768a88c4107b0c411b288f3400557477da15aa1eef0285db,Metadata:&PodSandboxMetadata{Name:coredns-674b8bbfcf-2cc2n,Uid:c1ecbdbb-8d9b-4ecf-a9a2-94d3478e1128,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1746904483348571269,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-674b8bbfcf-2cc2n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1ecbdbb-8d9b-4ecf-a9a2-94d3478e1128,k8s-app: kube-dns,pod-template-hash: 674b8bbfcf,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-05-10T19:14:42.853044983Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5e10d365a839c16e827ee6151e426b67c48efefa36ada8ccdd191eedeec26997,Metadata:&PodSandboxMetadata{Name:kube-proxy-skvbp,Uid:08543e5b-1085-4de5-9922-16d2a027fb0e,Namespace:kube-system,Attempt
:1,},State:SANDBOX_READY,CreatedAt:1746904483179594081,Labels:map[string]string{controller-revision-hash: 7b75d89869,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-skvbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08543e5b-1085-4de5-9922-16d2a027fb0e,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-05-10T19:14:42.853042309Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a39b3edf7ec64a2e10ac544b96954dd9ddcd04d24f72b9cddccfdee6ecf71de7,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-317241,Uid:21bc14276cd5381e92e6d9f4fa417bb5,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1746904478436658175,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-317241,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21bc14276cd5381e92e6d9f4fa417bb5,tier: control-plane,},Annotations:map[string
]string{kubernetes.io/config.hash: 21bc14276cd5381e92e6d9f4fa417bb5,kubernetes.io/config.seen: 2025-05-10T19:14:37.882070995Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3800a382e0bce1f8041ef956ce5cf003fc7b3dd72c393bf2815edc04d3bb0fe7,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-317241,Uid:2bd3407a5515906aae2ca3170d960a3a,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1746904474578627452,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-317241,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bd3407a5515906aae2ca3170d960a3a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2bd3407a5515906aae2ca3170d960a3a,kubernetes.io/config.seen: 2025-05-10T19:12:55.729082235Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:90fc3520c3595d665025a9ed61fba9da3eafa398cec8e92f62e71365e516d7e2,Metadata:&PodSan
dboxMetadata{Name:etcd-pause-317241,Uid:1a93f06e9953cac36959843399c2f269,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1746904474556019325,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-317241,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a93f06e9953cac36959843399c2f269,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.10:2379,kubernetes.io/config.hash: 1a93f06e9953cac36959843399c2f269,kubernetes.io/config.seen: 2025-05-10T19:12:55.729076145Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b2b7064d6a453b2b80292646fc74a1172056293617f21148146f0c58df8aaa70,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-317241,Uid:a0714c860dd7b384d9ba10850530c253,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1746904474525691374,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.k
ubernetes.pod.name: kube-apiserver-pause-317241,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0714c860dd7b384d9ba10850530c253,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.10:8443,kubernetes.io/config.hash: a0714c860dd7b384d9ba10850530c253,kubernetes.io/config.seen: 2025-05-10T19:12:55.729081032Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=145c3bdd-b954-45ec-b5a7-e91f2fd32472 name=/runtime.v1.RuntimeService/ListPodSandbox
	May 10 19:14:59 pause-317241 crio[3017]: time="2025-05-10 19:14:59.990445115Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=32af58e1-5a2b-4dae-873f-affc9786fad7 name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:14:59 pause-317241 crio[3017]: time="2025-05-10 19:14:59.990509398Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=32af58e1-5a2b-4dae-873f-affc9786fad7 name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:14:59 pause-317241 crio[3017]: time="2025-05-10 19:14:59.990798967Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:34e6c248131a5c5fe1f7df747e0077aab1986c049aba07113268029aa19ef292,PodSandboxId:fff448915d6c225f768a88c4107b0c411b288f3400557477da15aa1eef0285db,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1746904484029414801,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-2cc2n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1ecbdbb-8d9b-4ecf-a9a2-94d3478e1128,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2014853cb47d41b0658ce88cece8340e54d300cc95cc1ee4c8b1c6164a3e0fd4,PodSandboxId:5e10d365a839c16e827ee6151e426b67c48efefa36ada8ccdd191eedeec26997,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,State:CONTAINER_RUNNING,CreatedAt:1746904483425536227,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-skvbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 08543e5b-1085-4de5-9922-16d2a027fb0e,},Annotations:map[string]string{io.kubernetes.container.hash: 2406bd3f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3df19d449ad869e6c3b02da7edbd6ffb58d12f2d727f816ea15867bd4aa08d16,PodSandboxId:a39b3edf7ec64a2e10ac544b96954dd9ddcd04d24f72b9cddccfdee6ecf71de7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,State:CONTAINER_RUNNING,CreatedAt:1746904478790268218,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-317241,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21bc14276c
d5381e92e6d9f4fa417bb5,},Annotations:map[string]string{io.kubernetes.container.hash: fd54b99d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b2b5024e1df577f880cf4796775a8b53c2f1235f0b94dfa501a7e713354a4dc,PodSandboxId:3800a382e0bce1f8041ef956ce5cf003fc7b3dd72c393bf2815edc04d3bb0fe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,State:CONTAINER_RUNNING,CreatedAt:1746904478510784300,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-317241,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
bd3407a5515906aae2ca3170d960a3a,},Annotations:map[string]string{io.kubernetes.container.hash: 20846f37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d201484550c495e4de0fd8ba3f315ff5ee277b1f86a0dccd9c456e1bbc901089,PodSandboxId:90fc3520c3595d665025a9ed61fba9da3eafa398cec8e92f62e71365e516d7e2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1746904478450688392,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-317241,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a93f06e9953cac36959843399c2f269,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ac494014a96938db7a564fb5551c332727ec3747c4cbeadb8f0171a5dfbf786,PodSandboxId:b2b7064d6a453b2b80292646fc74a1172056293617f21148146f0c58df8aaa70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,State:CONTAINER_RUNNING,CreatedAt:1746904478424395512,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-317241,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0714c860dd7b384d9ba10850530c253,},Annotations:map[string]string{io.
kubernetes.container.hash: 2e2dc675,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=32af58e1-5a2b-4dae-873f-affc9786fad7 name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:15:00 pause-317241 crio[3017]: time="2025-05-10 19:15:00.049092561Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=067d6012-761b-4409-a7bf-5dd579a13dd1 name=/runtime.v1.RuntimeService/Version
	May 10 19:15:00 pause-317241 crio[3017]: time="2025-05-10 19:15:00.049183280Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=067d6012-761b-4409-a7bf-5dd579a13dd1 name=/runtime.v1.RuntimeService/Version
	May 10 19:15:00 pause-317241 crio[3017]: time="2025-05-10 19:15:00.052324061Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9d8c493d-1aac-471a-a0db-04fa0eb3adf7 name=/runtime.v1.ImageService/ImageFsInfo
	May 10 19:15:00 pause-317241 crio[3017]: time="2025-05-10 19:15:00.052847543Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746904500052818795,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125819,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9d8c493d-1aac-471a-a0db-04fa0eb3adf7 name=/runtime.v1.ImageService/ImageFsInfo
	May 10 19:15:00 pause-317241 crio[3017]: time="2025-05-10 19:15:00.053547749Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f8bba4b8-e39b-4deb-91ab-fa07cc300505 name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:15:00 pause-317241 crio[3017]: time="2025-05-10 19:15:00.053633357Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f8bba4b8-e39b-4deb-91ab-fa07cc300505 name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:15:00 pause-317241 crio[3017]: time="2025-05-10 19:15:00.054016558Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:34e6c248131a5c5fe1f7df747e0077aab1986c049aba07113268029aa19ef292,PodSandboxId:fff448915d6c225f768a88c4107b0c411b288f3400557477da15aa1eef0285db,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1746904484029414801,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-2cc2n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1ecbdbb-8d9b-4ecf-a9a2-94d3478e1128,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2014853cb47d41b0658ce88cece8340e54d300cc95cc1ee4c8b1c6164a3e0fd4,PodSandboxId:5e10d365a839c16e827ee6151e426b67c48efefa36ada8ccdd191eedeec26997,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,State:CONTAINER_RUNNING,CreatedAt:1746904483425536227,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-skvbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 08543e5b-1085-4de5-9922-16d2a027fb0e,},Annotations:map[string]string{io.kubernetes.container.hash: 2406bd3f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3df19d449ad869e6c3b02da7edbd6ffb58d12f2d727f816ea15867bd4aa08d16,PodSandboxId:a39b3edf7ec64a2e10ac544b96954dd9ddcd04d24f72b9cddccfdee6ecf71de7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,State:CONTAINER_RUNNING,CreatedAt:1746904478790268218,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-317241,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21bc14276c
d5381e92e6d9f4fa417bb5,},Annotations:map[string]string{io.kubernetes.container.hash: fd54b99d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b2b5024e1df577f880cf4796775a8b53c2f1235f0b94dfa501a7e713354a4dc,PodSandboxId:3800a382e0bce1f8041ef956ce5cf003fc7b3dd72c393bf2815edc04d3bb0fe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,State:CONTAINER_RUNNING,CreatedAt:1746904478510784300,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-317241,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
bd3407a5515906aae2ca3170d960a3a,},Annotations:map[string]string{io.kubernetes.container.hash: 20846f37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d201484550c495e4de0fd8ba3f315ff5ee277b1f86a0dccd9c456e1bbc901089,PodSandboxId:90fc3520c3595d665025a9ed61fba9da3eafa398cec8e92f62e71365e516d7e2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1746904478450688392,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-317241,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a93f06e9953cac36959843399c2f269,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ac494014a96938db7a564fb5551c332727ec3747c4cbeadb8f0171a5dfbf786,PodSandboxId:b2b7064d6a453b2b80292646fc74a1172056293617f21148146f0c58df8aaa70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,State:CONTAINER_RUNNING,CreatedAt:1746904478424395512,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-317241,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0714c860dd7b384d9ba10850530c253,},Annotations:map[string]string{io.
kubernetes.container.hash: 2e2dc675,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60d3f14ff709f7784986c33fb2487c2f5a652445cbba518e696441f701e452fb,PodSandboxId:856bd62f83b4aa95e06e560c66768f23474ea1a63edd1a9e38cccbb4abed762f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_EXITED,CreatedAt:1746904382352867917,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-2cc2n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1ecbdbb-8d9b-4ecf-a9a2-94d3478e1128,},Annotations:map[string]string{io.kubernetes.container.hash: eafd0
92d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9945e09f6b6d261039029f89e168d45dd9fc4acf65b417b637a7704d3cc6df5,PodSandboxId:9b2c1f7aa1ba54b4aeb16b05ffcd2872a4d7af4e11f7ae21d5377762d1f6735a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,State:CONTAINER_EXITED,CreatedAt:1746904381974441911,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-skvbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08543e5b-1085-4de5-9922-16d2a027fb0e,},Annotations:map[string]string{io.kubernetes.container.hash: 2406bd3f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b96ba0c6867681eb5c3dd0df167dc56dd09ffcb675f8fa26472566e54feb7385,PodSandboxId:be700641b4f491936267c67677c6d291d70dd5bbb8ecdd6364b8f62f336bc473,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,State:CONTAINER_EXITED,CreatedAt:1746904368930343150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-paus
e-317241,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21bc14276cd5381e92e6d9f4fa417bb5,},Annotations:map[string]string{io.kubernetes.container.hash: fd54b99d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6989a7e3ea042c054e6f979c8042e6a4f7c82fab32f4778857b936239f6db91c,PodSandboxId:e7cc467ddfff88dbdf134a7d835cc4352ab0454ce0419546d656884e172cf011,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_EXITED,CreatedAt:1746904368903169884,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-317241,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: 1a93f06e9953cac36959843399c2f269,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b84f77943081f73cd80a1376987cceac5bbcb6932aaab74ffc59f9400d903650,PodSandboxId:afbad45567ddd6cd513797e628303d6d56e830f7a22f41bf1e684135608a128a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,State:CONTAINER_EXITED,CreatedAt:1746904368821503920,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-317241,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: a0714c860dd7b384d9ba10850530c253,},Annotations:map[string]string{io.kubernetes.container.hash: 2e2dc675,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ac36579b810dd23a78153742e60a40498c4c2744c1c1b600d92974993419a57,PodSandboxId:8c48f0b1b59eb870657c77354fe231a0f6694e2aa715e77ecab6ba4083920287,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,State:CONTAINER_EXITED,CreatedAt:1746904368701215523,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-317241,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 2bd3407a5515906aae2ca3170d960a3a,},Annotations:map[string]string{io.kubernetes.container.hash: 20846f37,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f8bba4b8-e39b-4deb-91ab-fa07cc300505 name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:15:00 pause-317241 crio[3017]: time="2025-05-10 19:15:00.120043401Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=160578ee-b497-4bfc-a608-3d2bf20d9c5f name=/runtime.v1.RuntimeService/Version
	May 10 19:15:00 pause-317241 crio[3017]: time="2025-05-10 19:15:00.120124880Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=160578ee-b497-4bfc-a608-3d2bf20d9c5f name=/runtime.v1.RuntimeService/Version
	May 10 19:15:00 pause-317241 crio[3017]: time="2025-05-10 19:15:00.121660248Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=53285925-be4e-4aaa-831f-6ceacd9a665e name=/runtime.v1.ImageService/ImageFsInfo
	May 10 19:15:00 pause-317241 crio[3017]: time="2025-05-10 19:15:00.122190590Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746904500122163179,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125819,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=53285925-be4e-4aaa-831f-6ceacd9a665e name=/runtime.v1.ImageService/ImageFsInfo
	May 10 19:15:00 pause-317241 crio[3017]: time="2025-05-10 19:15:00.123214663Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cf287483-fa17-412f-8e9f-af52835e10ab name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:15:00 pause-317241 crio[3017]: time="2025-05-10 19:15:00.123277051Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cf287483-fa17-412f-8e9f-af52835e10ab name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:15:00 pause-317241 crio[3017]: time="2025-05-10 19:15:00.123885029Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:34e6c248131a5c5fe1f7df747e0077aab1986c049aba07113268029aa19ef292,PodSandboxId:fff448915d6c225f768a88c4107b0c411b288f3400557477da15aa1eef0285db,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1746904484029414801,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-2cc2n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1ecbdbb-8d9b-4ecf-a9a2-94d3478e1128,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2014853cb47d41b0658ce88cece8340e54d300cc95cc1ee4c8b1c6164a3e0fd4,PodSandboxId:5e10d365a839c16e827ee6151e426b67c48efefa36ada8ccdd191eedeec26997,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,State:CONTAINER_RUNNING,CreatedAt:1746904483425536227,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-skvbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 08543e5b-1085-4de5-9922-16d2a027fb0e,},Annotations:map[string]string{io.kubernetes.container.hash: 2406bd3f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3df19d449ad869e6c3b02da7edbd6ffb58d12f2d727f816ea15867bd4aa08d16,PodSandboxId:a39b3edf7ec64a2e10ac544b96954dd9ddcd04d24f72b9cddccfdee6ecf71de7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,State:CONTAINER_RUNNING,CreatedAt:1746904478790268218,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-317241,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21bc14276c
d5381e92e6d9f4fa417bb5,},Annotations:map[string]string{io.kubernetes.container.hash: fd54b99d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b2b5024e1df577f880cf4796775a8b53c2f1235f0b94dfa501a7e713354a4dc,PodSandboxId:3800a382e0bce1f8041ef956ce5cf003fc7b3dd72c393bf2815edc04d3bb0fe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,State:CONTAINER_RUNNING,CreatedAt:1746904478510784300,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-317241,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
bd3407a5515906aae2ca3170d960a3a,},Annotations:map[string]string{io.kubernetes.container.hash: 20846f37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d201484550c495e4de0fd8ba3f315ff5ee277b1f86a0dccd9c456e1bbc901089,PodSandboxId:90fc3520c3595d665025a9ed61fba9da3eafa398cec8e92f62e71365e516d7e2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1746904478450688392,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-317241,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a93f06e9953cac36959843399c2f269,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ac494014a96938db7a564fb5551c332727ec3747c4cbeadb8f0171a5dfbf786,PodSandboxId:b2b7064d6a453b2b80292646fc74a1172056293617f21148146f0c58df8aaa70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,State:CONTAINER_RUNNING,CreatedAt:1746904478424395512,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-317241,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0714c860dd7b384d9ba10850530c253,},Annotations:map[string]string{io.
kubernetes.container.hash: 2e2dc675,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60d3f14ff709f7784986c33fb2487c2f5a652445cbba518e696441f701e452fb,PodSandboxId:856bd62f83b4aa95e06e560c66768f23474ea1a63edd1a9e38cccbb4abed762f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_EXITED,CreatedAt:1746904382352867917,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-2cc2n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1ecbdbb-8d9b-4ecf-a9a2-94d3478e1128,},Annotations:map[string]string{io.kubernetes.container.hash: eafd0
92d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9945e09f6b6d261039029f89e168d45dd9fc4acf65b417b637a7704d3cc6df5,PodSandboxId:9b2c1f7aa1ba54b4aeb16b05ffcd2872a4d7af4e11f7ae21d5377762d1f6735a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,State:CONTAINER_EXITED,CreatedAt:1746904381974441911,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-skvbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08543e5b-1085-4de5-9922-16d2a027fb0e,},Annotations:map[string]string{io.kubernetes.container.hash: 2406bd3f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b96ba0c6867681eb5c3dd0df167dc56dd09ffcb675f8fa26472566e54feb7385,PodSandboxId:be700641b4f491936267c67677c6d291d70dd5bbb8ecdd6364b8f62f336bc473,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,State:CONTAINER_EXITED,CreatedAt:1746904368930343150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-paus
e-317241,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21bc14276cd5381e92e6d9f4fa417bb5,},Annotations:map[string]string{io.kubernetes.container.hash: fd54b99d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6989a7e3ea042c054e6f979c8042e6a4f7c82fab32f4778857b936239f6db91c,PodSandboxId:e7cc467ddfff88dbdf134a7d835cc4352ab0454ce0419546d656884e172cf011,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_EXITED,CreatedAt:1746904368903169884,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-317241,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: 1a93f06e9953cac36959843399c2f269,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b84f77943081f73cd80a1376987cceac5bbcb6932aaab74ffc59f9400d903650,PodSandboxId:afbad45567ddd6cd513797e628303d6d56e830f7a22f41bf1e684135608a128a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,State:CONTAINER_EXITED,CreatedAt:1746904368821503920,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-317241,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: a0714c860dd7b384d9ba10850530c253,},Annotations:map[string]string{io.kubernetes.container.hash: 2e2dc675,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ac36579b810dd23a78153742e60a40498c4c2744c1c1b600d92974993419a57,PodSandboxId:8c48f0b1b59eb870657c77354fe231a0f6694e2aa715e77ecab6ba4083920287,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,State:CONTAINER_EXITED,CreatedAt:1746904368701215523,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-317241,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 2bd3407a5515906aae2ca3170d960a3a,},Annotations:map[string]string{io.kubernetes.container.hash: 20846f37,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cf287483-fa17-412f-8e9f-af52835e10ab name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	34e6c248131a5       1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b   16 seconds ago       Running             coredns                   1                   fff448915d6c2       coredns-674b8bbfcf-2cc2n
	2014853cb47d4       f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68   16 seconds ago       Running             kube-proxy                1                   5e10d365a839c       kube-proxy-skvbp
	3df19d449ad86       8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4   21 seconds ago       Running             kube-scheduler            1                   a39b3edf7ec64       kube-scheduler-pause-317241
	5b2b5024e1df5       1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02   21 seconds ago       Running             kube-controller-manager   1                   3800a382e0bce       kube-controller-manager-pause-317241
	d201484550c49       499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1   21 seconds ago       Running             etcd                      1                   90fc3520c3595       etcd-pause-317241
	2ac494014a969       6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4   21 seconds ago       Running             kube-apiserver            1                   b2b7064d6a453       kube-apiserver-pause-317241
	60d3f14ff709f       1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b   About a minute ago   Exited              coredns                   0                   856bd62f83b4a       coredns-674b8bbfcf-2cc2n
	c9945e09f6b6d       f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68   About a minute ago   Exited              kube-proxy                0                   9b2c1f7aa1ba5       kube-proxy-skvbp
	b96ba0c686768       8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4   2 minutes ago        Exited              kube-scheduler            0                   be700641b4f49       kube-scheduler-pause-317241
	6989a7e3ea042       499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1   2 minutes ago        Exited              etcd                      0                   e7cc467ddfff8       etcd-pause-317241
	b84f77943081f       6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4   2 minutes ago        Exited              kube-apiserver            0                   afbad45567ddd       kube-apiserver-pause-317241
	5ac36579b810d       1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02   2 minutes ago        Exited              kube-controller-manager   0                   8c48f0b1b59eb       kube-controller-manager-pause-317241
	
	
	==> coredns [34e6c248131a5c5fe1f7df747e0077aab1986c049aba07113268029aa19ef292] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.0
	linux/amd64, go1.23.3, 51e11f1
	[INFO] 127.0.0.1:42670 - 30856 "HINFO IN 7684510342706217908.6727844740725176358. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016294319s
	
	
	==> coredns [60d3f14ff709f7784986c33fb2487c2f5a652445cbba518e696441f701e452fb] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.0
	linux/amd64, go1.23.3, 51e11f1
	[INFO] 127.0.0.1:47641 - 22513 "HINFO IN 1095390168192950703.4527940315372221367. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019643509s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-317241
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-317241
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e96c83983357cd8557f3cdfe077a25cc73d485a4
	                    minikube.k8s.io/name=pause-317241
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_05_10T19_12_56_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 May 2025 19:12:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-317241
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 May 2025 19:14:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 May 2025 19:14:42 +0000   Sat, 10 May 2025 19:12:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 May 2025 19:14:42 +0000   Sat, 10 May 2025 19:12:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 May 2025 19:14:42 +0000   Sat, 10 May 2025 19:12:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 May 2025 19:14:42 +0000   Sat, 10 May 2025 19:12:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.10
	  Hostname:    pause-317241
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015664Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015664Ki
	  pods:               110
	System Info:
	  Machine ID:                 f4188d6039414f158be6e0dfa4fac62c
	  System UUID:                f4188d60-3941-4f15-8be6-e0dfa4fac62c
	  Boot ID:                    9e56bab3-4e29-44a7-88b2-5d509d360c89
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2024.11.2
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.33.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-674b8bbfcf-2cc2n                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     119s
	  kube-system                 etcd-pause-317241                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m4s
	  kube-system                 kube-apiserver-pause-317241             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-controller-manager-pause-317241    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-proxy-skvbp                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-scheduler-pause-317241             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 16s                    kube-proxy       
	  Normal  Starting                 117s                   kube-proxy       
	  Normal  NodeAllocatableEnforced  2m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     2m12s (x7 over 2m13s)  kubelet          Node pause-317241 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m12s (x8 over 2m13s)  kubelet          Node pause-317241 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m12s (x8 over 2m13s)  kubelet          Node pause-317241 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m5s                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m5s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     2m4s                   kubelet          Node pause-317241 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m4s                   kubelet          Node pause-317241 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                2m4s                   kubelet          Node pause-317241 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  2m4s                   kubelet          Node pause-317241 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           2m                     node-controller  Node pause-317241 event: Registered Node pause-317241 in Controller
	  Normal  Starting                 23s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)      kubelet          Node pause-317241 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)      kubelet          Node pause-317241 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)      kubelet          Node pause-317241 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15s                    node-controller  Node pause-317241 event: Registered Node pause-317241 in Controller
	
	
	==> dmesg <==
	[May10 19:12] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.000002] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.000035] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.001438] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.006667] (rpcbind)[143]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.135091] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000005] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.103297] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.117767] kauditd_printk_skb: 46 callbacks suppressed
	[  +0.119071] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.155064] kauditd_printk_skb: 67 callbacks suppressed
	[May10 19:13] kauditd_printk_skb: 19 callbacks suppressed
	[ +10.930868] kauditd_printk_skb: 66 callbacks suppressed
	[ +24.886516] kauditd_printk_skb: 22 callbacks suppressed
	[May10 19:14] kauditd_printk_skb: 105 callbacks suppressed
	[  +0.000086] kauditd_printk_skb: 39 callbacks suppressed
	
	
	==> etcd [6989a7e3ea042c054e6f979c8042e6a4f7c82fab32f4778857b936239f6db91c] <==
	{"level":"info","ts":"2025-05-10T19:12:49.424578Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e became leader at term 2"}
	{"level":"info","ts":"2025-05-10T19:12:49.424672Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f8926bd555ec3d0e elected leader f8926bd555ec3d0e at term 2"}
	{"level":"info","ts":"2025-05-10T19:12:49.427557Z","caller":"etcdserver/server.go:2144","msg":"published local member to cluster through raft","local-member-id":"f8926bd555ec3d0e","local-member-attributes":"{Name:pause-317241 ClientURLs:[https://192.168.39.10:2379]}","request-path":"/0/members/f8926bd555ec3d0e/attributes","cluster-id":"3a710b3f69152e32","publish-timeout":"7s"}
	{"level":"info","ts":"2025-05-10T19:12:49.427793Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T19:12:49.428362Z","caller":"etcdserver/server.go:2697","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-05-10T19:12:49.434259Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T19:12:49.440811Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.10:2379"}
	{"level":"info","ts":"2025-05-10T19:12:49.434614Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T19:12:49.436178Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-05-10T19:12:49.442225Z","caller":"membership/cluster.go:587","msg":"set initial cluster version","cluster-id":"3a710b3f69152e32","local-member-id":"f8926bd555ec3d0e","cluster-version":"3.5"}
	{"level":"info","ts":"2025-05-10T19:12:49.445201Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-05-10T19:12:49.445365Z","caller":"etcdserver/server.go:2721","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-05-10T19:12:49.445678Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-05-10T19:12:49.446482Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T19:12:49.447622Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-05-10T19:14:25.463058Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-05-10T19:14:25.463232Z","caller":"embed/etcd.go:408","msg":"closing etcd server","name":"pause-317241","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.10:2380"],"advertise-client-urls":["https://192.168.39.10:2379"]}
	{"level":"info","ts":"2025-05-10T19:14:25.539900Z","caller":"etcdserver/server.go:1546","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f8926bd555ec3d0e","current-leader-member-id":"f8926bd555ec3d0e"}
	{"level":"warn","ts":"2025-05-10T19:14:25.540154Z","caller":"embed/serve.go:235","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-05-10T19:14:25.540172Z","caller":"embed/serve.go:235","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.10:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-05-10T19:14:25.540468Z","caller":"embed/serve.go:237","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.10:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-05-10T19:14:25.540298Z","caller":"embed/serve.go:237","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-05-10T19:14:25.544036Z","caller":"embed/etcd.go:613","msg":"stopping serving peer traffic","address":"192.168.39.10:2380"}
	{"level":"info","ts":"2025-05-10T19:14:25.544214Z","caller":"embed/etcd.go:618","msg":"stopped serving peer traffic","address":"192.168.39.10:2380"}
	{"level":"info","ts":"2025-05-10T19:14:25.544264Z","caller":"embed/etcd.go:410","msg":"closed etcd server","name":"pause-317241","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.10:2380"],"advertise-client-urls":["https://192.168.39.10:2379"]}
	
	
	==> etcd [d201484550c495e4de0fd8ba3f315ff5ee277b1f86a0dccd9c456e1bbc901089] <==
	{"level":"info","ts":"2025-05-10T19:14:38.984901Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-05-10T19:14:38.985073Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-05-10T19:14:38.985108Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-05-10T19:14:38.986985Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T19:14:38.990684Z","caller":"embed/etcd.go:762","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-05-10T19:14:38.991360Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"f8926bd555ec3d0e","initial-advertise-peer-urls":["https://192.168.39.10:2380"],"listen-peer-urls":["https://192.168.39.10:2380"],"advertise-client-urls":["https://192.168.39.10:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.10:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-05-10T19:14:38.991486Z","caller":"embed/etcd.go:908","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-05-10T19:14:38.991845Z","caller":"embed/etcd.go:633","msg":"serving peer traffic","address":"192.168.39.10:2380"}
	{"level":"info","ts":"2025-05-10T19:14:38.992284Z","caller":"embed/etcd.go:603","msg":"cmux::serve","address":"192.168.39.10:2380"}
	{"level":"info","ts":"2025-05-10T19:14:39.324830Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e is starting a new election at term 2"}
	{"level":"info","ts":"2025-05-10T19:14:39.324956Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e became pre-candidate at term 2"}
	{"level":"info","ts":"2025-05-10T19:14:39.324998Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e received MsgPreVoteResp from f8926bd555ec3d0e at term 2"}
	{"level":"info","ts":"2025-05-10T19:14:39.325033Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e became candidate at term 3"}
	{"level":"info","ts":"2025-05-10T19:14:39.325138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e received MsgVoteResp from f8926bd555ec3d0e at term 3"}
	{"level":"info","ts":"2025-05-10T19:14:39.325192Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e became leader at term 3"}
	{"level":"info","ts":"2025-05-10T19:14:39.325222Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f8926bd555ec3d0e elected leader f8926bd555ec3d0e at term 3"}
	{"level":"info","ts":"2025-05-10T19:14:39.334004Z","caller":"etcdserver/server.go:2144","msg":"published local member to cluster through raft","local-member-id":"f8926bd555ec3d0e","local-member-attributes":"{Name:pause-317241 ClientURLs:[https://192.168.39.10:2379]}","request-path":"/0/members/f8926bd555ec3d0e/attributes","cluster-id":"3a710b3f69152e32","publish-timeout":"7s"}
	{"level":"info","ts":"2025-05-10T19:14:39.334132Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T19:14:39.336784Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-05-10T19:14:39.336836Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-05-10T19:14:39.334170Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T19:14:39.338288Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T19:14:39.339020Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.10:2379"}
	{"level":"info","ts":"2025-05-10T19:14:39.341169Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T19:14:39.343660Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 19:15:00 up 2 min,  0 user,  load average: 1.37, 0.58, 0.22
	Linux pause-317241 5.10.207 #1 SMP Fri May 9 03:49:24 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2024.11.2"
	
	
	==> kube-apiserver [2ac494014a96938db7a564fb5551c332727ec3747c4cbeadb8f0171a5dfbf786] <==
	I0510 19:14:42.048000       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0510 19:14:42.051861       1 shared_informer.go:357] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I0510 19:14:42.052000       1 default_servicecidr_controller.go:136] Shutting down kubernetes-service-cidr-controller
	I0510 19:14:42.060410       1 cache.go:39] Caches are synced for autoregister controller
	I0510 19:14:42.060971       1 shared_informer.go:357] "Caches are synced" controller="ipallocator-repair-controller"
	I0510 19:14:42.061345       1 shared_informer.go:357] "Caches are synced" controller="node_authorizer"
	I0510 19:14:42.064449       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 19:14:42.073914       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0510 19:14:42.074441       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0510 19:14:42.074759       1 shared_informer.go:357] "Caches are synced" controller="configmaps"
	I0510 19:14:42.075770       1 shared_informer.go:357] "Caches are synced" controller="cluster_authentication_trust_controller"
	I0510 19:14:42.075818       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0510 19:14:42.075826       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0510 19:14:42.078308       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0510 19:14:42.079596       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0510 19:14:42.884386       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0510 19:14:43.002373       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0510 19:14:43.884951       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0510 19:14:44.007664       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0510 19:14:44.245831       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0510 19:14:44.324578       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0510 19:14:45.463864       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 19:14:45.603319       1 controller.go:667] quota admission added evaluator for: endpoints
	I0510 19:14:45.708024       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0510 19:14:45.855557       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [b84f77943081f73cd80a1376987cceac5bbcb6932aaab74ffc59f9400d903650] <==
	W0510 19:14:25.476669       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0510 19:14:25.476798       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0510 19:14:25.476967       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0510 19:14:25.477170       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0510 19:14:25.477309       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0510 19:14:25.477387       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0510 19:14:25.477499       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0510 19:14:25.477621       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0510 19:14:25.477771       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0510 19:14:25.477874       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0510 19:14:25.478002       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0510 19:14:25.478117       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0510 19:14:25.479240       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0510 19:14:25.479416       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0510 19:14:25.479551       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0510 19:14:25.479674       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0510 19:14:25.479908       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0510 19:14:25.480120       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0510 19:14:25.480242       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0510 19:14:25.480364       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0510 19:14:25.480549       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0510 19:14:25.480579       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0510 19:14:25.480695       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0510 19:14:25.480897       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0510 19:14:25.481009       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [5ac36579b810dd23a78153742e60a40498c4c2744c1c1b600d92974993419a57] <==
	I0510 19:13:00.306988       1 shared_informer.go:357] "Caches are synced" controller="node"
	I0510 19:13:00.308301       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0510 19:13:00.308374       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0510 19:13:00.308397       1 shared_informer.go:350] "Waiting for caches to sync" controller="cidrallocator"
	I0510 19:13:00.308415       1 shared_informer.go:357] "Caches are synced" controller="cidrallocator"
	I0510 19:13:00.320914       1 shared_informer.go:357] "Caches are synced" controller="ReplicationController"
	I0510 19:13:00.342666       1 shared_informer.go:357] "Caches are synced" controller="endpoint"
	I0510 19:13:00.342817       1 shared_informer.go:357] "Caches are synced" controller="bootstrap_signer"
	I0510 19:13:00.344347       1 shared_informer.go:357] "Caches are synced" controller="ClusterRoleAggregator"
	I0510 19:13:00.345471       1 shared_informer.go:357] "Caches are synced" controller="stateful set"
	I0510 19:13:00.351334       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-317241" podCIDRs=["10.244.0.0/24"]
	I0510 19:13:00.392698       1 shared_informer.go:357] "Caches are synced" controller="attach detach"
	I0510 19:13:00.392814       1 shared_informer.go:357] "Caches are synced" controller="persistent volume"
	I0510 19:13:00.499983       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0510 19:13:00.502930       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0510 19:13:00.504418       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0510 19:13:00.594171       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrapproving"
	I0510 19:13:00.598974       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0510 19:13:00.599238       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0510 19:13:00.614435       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0510 19:13:00.627576       1 shared_informer.go:357] "Caches are synced" controller="HPA"
	I0510 19:13:01.040774       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0510 19:13:01.040799       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0510 19:13:01.040806       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0510 19:13:01.049892       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [5b2b5024e1df577f880cf4796775a8b53c2f1235f0b94dfa501a7e713354a4dc] <==
	I0510 19:14:45.363505       1 shared_informer.go:357] "Caches are synced" controller="TTL after finished"
	I0510 19:14:45.366481       1 shared_informer.go:357] "Caches are synced" controller="attach detach"
	I0510 19:14:45.373838       1 shared_informer.go:357] "Caches are synced" controller="ClusterRoleAggregator"
	I0510 19:14:45.378273       1 shared_informer.go:357] "Caches are synced" controller="taint-eviction-controller"
	I0510 19:14:45.382845       1 shared_informer.go:357] "Caches are synced" controller="service-cidr-controller"
	I0510 19:14:45.393909       1 shared_informer.go:357] "Caches are synced" controller="job"
	I0510 19:14:45.399495       1 shared_informer.go:357] "Caches are synced" controller="namespace"
	I0510 19:14:45.399633       1 shared_informer.go:357] "Caches are synced" controller="endpoint"
	I0510 19:14:45.401646       1 shared_informer.go:357] "Caches are synced" controller="GC"
	I0510 19:14:45.401807       1 shared_informer.go:357] "Caches are synced" controller="taint"
	I0510 19:14:45.401941       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0510 19:14:45.402043       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-317241"
	I0510 19:14:45.402155       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0510 19:14:45.409262       1 shared_informer.go:357] "Caches are synced" controller="ephemeral"
	I0510 19:14:45.491978       1 shared_informer.go:357] "Caches are synced" controller="ReplicaSet"
	I0510 19:14:45.509292       1 shared_informer.go:357] "Caches are synced" controller="disruption"
	I0510 19:14:45.523087       1 shared_informer.go:357] "Caches are synced" controller="deployment"
	I0510 19:14:45.603642       1 shared_informer.go:357] "Caches are synced" controller="endpoint_slice"
	I0510 19:14:45.615487       1 shared_informer.go:357] "Caches are synced" controller="endpoint_slice_mirroring"
	I0510 19:14:45.687623       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0510 19:14:45.696762       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0510 19:14:46.129280       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0510 19:14:46.130590       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0510 19:14:46.130630       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0510 19:14:46.130641       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [2014853cb47d41b0658ce88cece8340e54d300cc95cc1ee4c8b1c6164a3e0fd4] <==
	E0510 19:14:43.844815       1 proxier.go:732] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0510 19:14:43.861147       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.39.10"]
	E0510 19:14:43.861233       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0510 19:14:43.978934       1 server_linux.go:122] "No iptables support for family" ipFamily="IPv6"
	I0510 19:14:43.978972       1 server.go:256] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0510 19:14:43.979114       1 server_linux.go:145] "Using iptables Proxier"
	I0510 19:14:44.026322       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0510 19:14:44.026678       1 server.go:516] "Version info" version="v1.33.0"
	I0510 19:14:44.026700       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0510 19:14:44.044281       1 config.go:199] "Starting service config controller"
	I0510 19:14:44.044302       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0510 19:14:44.044329       1 config.go:105] "Starting endpoint slice config controller"
	I0510 19:14:44.044334       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0510 19:14:44.044350       1 config.go:440] "Starting serviceCIDR config controller"
	I0510 19:14:44.044356       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0510 19:14:44.044393       1 config.go:329] "Starting node config controller"
	I0510 19:14:44.044398       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0510 19:14:44.144942       1 shared_informer.go:357] "Caches are synced" controller="node config"
	I0510 19:14:44.144990       1 shared_informer.go:357] "Caches are synced" controller="service config"
	I0510 19:14:44.145026       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	I0510 19:14:44.145753       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [c9945e09f6b6d261039029f89e168d45dd9fc4acf65b417b637a7704d3cc6df5] <==
	E0510 19:13:02.693457       1 proxier.go:732] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0510 19:13:02.758813       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.39.10"]
	E0510 19:13:02.759264       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0510 19:13:02.813058       1 server_linux.go:122] "No iptables support for family" ipFamily="IPv6"
	I0510 19:13:02.813121       1 server.go:256] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0510 19:13:02.813154       1 server_linux.go:145] "Using iptables Proxier"
	I0510 19:13:02.823619       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0510 19:13:02.825210       1 server.go:516] "Version info" version="v1.33.0"
	I0510 19:13:02.825246       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0510 19:13:02.830903       1 config.go:199] "Starting service config controller"
	I0510 19:13:02.831362       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0510 19:13:02.831483       1 config.go:105] "Starting endpoint slice config controller"
	I0510 19:13:02.831573       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0510 19:13:02.831604       1 config.go:440] "Starting serviceCIDR config controller"
	I0510 19:13:02.831609       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0510 19:13:02.836974       1 config.go:329] "Starting node config controller"
	I0510 19:13:02.837043       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0510 19:13:02.931963       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
	I0510 19:13:02.932004       1 shared_informer.go:357] "Caches are synced" controller="service config"
	I0510 19:13:02.934797       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	I0510 19:13:02.939205       1 shared_informer.go:357] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [3df19d449ad869e6c3b02da7edbd6ffb58d12f2d727f816ea15867bd4aa08d16] <==
	I0510 19:14:40.405840       1 serving.go:386] Generated self-signed cert in-memory
	W0510 19:14:41.944865       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0510 19:14:41.945157       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0510 19:14:41.945301       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0510 19:14:41.945327       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0510 19:14:42.013382       1 server.go:171] "Starting Kubernetes Scheduler" version="v1.33.0"
	I0510 19:14:42.013480       1 server.go:173] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0510 19:14:42.016453       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0510 19:14:42.016527       1 shared_informer.go:350] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0510 19:14:42.017173       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0510 19:14:42.017686       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0510 19:14:42.116998       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [b96ba0c6867681eb5c3dd0df167dc56dd09ffcb675f8fa26472566e54feb7385] <==
	E0510 19:12:52.728528       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0510 19:12:52.728590       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0510 19:12:52.728664       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0510 19:12:52.730336       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0510 19:12:52.730442       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0510 19:12:52.730507       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0510 19:12:52.730578       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0510 19:12:52.730615       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0510 19:12:52.730681       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0510 19:12:53.544505       1 reflector.go:200] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0510 19:12:53.550114       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0510 19:12:53.624424       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0510 19:12:53.688437       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0510 19:12:53.696486       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0510 19:12:53.741325       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0510 19:12:53.826290       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0510 19:12:53.904254       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0510 19:12:53.968405       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0510 19:12:53.979343       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0510 19:12:53.989836       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0510 19:12:54.009435       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0510 19:12:54.034193       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0510 19:12:54.045091       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I0510 19:12:56.911917       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0510 19:14:25.461949       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	May 10 19:14:40 pause-317241 kubelet[3473]: E0510 19:14:40.233692    3473 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-317241\" not found" node="pause-317241"
	May 10 19:14:41 pause-317241 kubelet[3473]: E0510 19:14:41.237427    3473 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-317241\" not found" node="pause-317241"
	May 10 19:14:41 pause-317241 kubelet[3473]: E0510 19:14:41.238392    3473 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-317241\" not found" node="pause-317241"
	May 10 19:14:41 pause-317241 kubelet[3473]: E0510 19:14:41.239007    3473 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-317241\" not found" node="pause-317241"
	May 10 19:14:41 pause-317241 kubelet[3473]: I0510 19:14:41.995801    3473 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-317241"
	May 10 19:14:42 pause-317241 kubelet[3473]: I0510 19:14:42.089038    3473 kubelet_node_status.go:124] "Node was previously registered" node="pause-317241"
	May 10 19:14:42 pause-317241 kubelet[3473]: I0510 19:14:42.089243    3473 kubelet_node_status.go:78] "Successfully registered node" node="pause-317241"
	May 10 19:14:42 pause-317241 kubelet[3473]: I0510 19:14:42.089304    3473 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 10 19:14:42 pause-317241 kubelet[3473]: I0510 19:14:42.090649    3473 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	May 10 19:14:42 pause-317241 kubelet[3473]: E0510 19:14:42.126649    3473 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-317241\" already exists" pod="kube-system/kube-apiserver-pause-317241"
	May 10 19:14:42 pause-317241 kubelet[3473]: I0510 19:14:42.126786    3473 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-317241"
	May 10 19:14:42 pause-317241 kubelet[3473]: E0510 19:14:42.151597    3473 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-317241\" already exists" pod="kube-system/kube-controller-manager-pause-317241"
	May 10 19:14:42 pause-317241 kubelet[3473]: I0510 19:14:42.151861    3473 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-317241"
	May 10 19:14:42 pause-317241 kubelet[3473]: E0510 19:14:42.166899    3473 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-317241\" already exists" pod="kube-system/kube-scheduler-pause-317241"
	May 10 19:14:42 pause-317241 kubelet[3473]: I0510 19:14:42.167085    3473 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-317241"
	May 10 19:14:42 pause-317241 kubelet[3473]: E0510 19:14:42.183352    3473 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"etcd-pause-317241\" already exists" pod="kube-system/etcd-pause-317241"
	May 10 19:14:42 pause-317241 kubelet[3473]: I0510 19:14:42.848797    3473 apiserver.go:52] "Watching apiserver"
	May 10 19:14:42 pause-317241 kubelet[3473]: I0510 19:14:42.895374    3473 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
	May 10 19:14:42 pause-317241 kubelet[3473]: I0510 19:14:42.996046    3473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/08543e5b-1085-4de5-9922-16d2a027fb0e-xtables-lock\") pod \"kube-proxy-skvbp\" (UID: \"08543e5b-1085-4de5-9922-16d2a027fb0e\") " pod="kube-system/kube-proxy-skvbp"
	May 10 19:14:42 pause-317241 kubelet[3473]: I0510 19:14:42.996502    3473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/08543e5b-1085-4de5-9922-16d2a027fb0e-lib-modules\") pod \"kube-proxy-skvbp\" (UID: \"08543e5b-1085-4de5-9922-16d2a027fb0e\") " pod="kube-system/kube-proxy-skvbp"
	May 10 19:14:46 pause-317241 kubelet[3473]: I0510 19:14:46.604017    3473 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	May 10 19:14:48 pause-317241 kubelet[3473]: E0510 19:14:48.062061    3473 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746904488061659268,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125819,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 19:14:48 pause-317241 kubelet[3473]: E0510 19:14:48.062259    3473 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746904488061659268,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125819,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 19:14:58 pause-317241 kubelet[3473]: E0510 19:14:58.064915    3473 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746904498064385504,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125819,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 19:14:58 pause-317241 kubelet[3473]: E0510 19:14:58.064959    3473 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746904498064385504,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125819,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-317241 -n pause-317241
helpers_test.go:261: (dbg) Run:  kubectl --context pause-317241 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-317241 -n pause-317241
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-317241 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-317241 logs -n 25: (3.507724606s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-380533 sudo              | cilium-380533             | jenkins | v1.35.0 | 10 May 25 19:10 UTC |                     |
	|         | containerd config dump             |                           |         |         |                     |                     |
	| ssh     | -p cilium-380533 sudo              | cilium-380533             | jenkins | v1.35.0 | 10 May 25 19:10 UTC |                     |
	|         | systemctl status crio --all        |                           |         |         |                     |                     |
	|         | --full --no-pager                  |                           |         |         |                     |                     |
	| ssh     | -p cilium-380533 sudo              | cilium-380533             | jenkins | v1.35.0 | 10 May 25 19:10 UTC |                     |
	|         | systemctl cat crio --no-pager      |                           |         |         |                     |                     |
	| ssh     | -p cilium-380533 sudo find         | cilium-380533             | jenkins | v1.35.0 | 10 May 25 19:10 UTC |                     |
	|         | /etc/crio -type f -exec sh -c      |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;               |                           |         |         |                     |                     |
	| ssh     | -p cilium-380533 sudo crio         | cilium-380533             | jenkins | v1.35.0 | 10 May 25 19:10 UTC |                     |
	|         | config                             |                           |         |         |                     |                     |
	| delete  | -p cilium-380533                   | cilium-380533             | jenkins | v1.35.0 | 10 May 25 19:10 UTC | 10 May 25 19:10 UTC |
	| start   | -p kubernetes-upgrade-517660       | kubernetes-upgrade-517660 | jenkins | v1.35.0 | 10 May 25 19:10 UTC |                     |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0       |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-065180             | NoKubernetes-065180       | jenkins | v1.35.0 | 10 May 25 19:11 UTC | 10 May 25 19:12 UTC |
	|         | --no-kubernetes --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p offline-crio-031624             | offline-crio-031624       | jenkins | v1.35.0 | 10 May 25 19:11 UTC | 10 May 25 19:11 UTC |
	| start   | -p pause-317241 --memory=2048      | pause-317241              | jenkins | v1.35.0 | 10 May 25 19:11 UTC | 10 May 25 19:13 UTC |
	|         | --install-addons=false             |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p running-upgrade-085041          | running-upgrade-085041    | jenkins | v1.35.0 | 10 May 25 19:12 UTC | 10 May 25 19:13 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-065180             | NoKubernetes-065180       | jenkins | v1.35.0 | 10 May 25 19:12 UTC | 10 May 25 19:12 UTC |
	| start   | -p NoKubernetes-065180             | NoKubernetes-065180       | jenkins | v1.35.0 | 10 May 25 19:12 UTC | 10 May 25 19:13 UTC |
	|         | --no-kubernetes --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-065180 sudo        | NoKubernetes-065180       | jenkins | v1.35.0 | 10 May 25 19:13 UTC |                     |
	|         | systemctl is-active --quiet        |                           |         |         |                     |                     |
	|         | service kubelet                    |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-065180             | NoKubernetes-065180       | jenkins | v1.35.0 | 10 May 25 19:13 UTC | 10 May 25 19:13 UTC |
	| delete  | -p running-upgrade-085041          | running-upgrade-085041    | jenkins | v1.35.0 | 10 May 25 19:13 UTC | 10 May 25 19:13 UTC |
	| start   | -p NoKubernetes-065180             | NoKubernetes-065180       | jenkins | v1.35.0 | 10 May 25 19:13 UTC | 10 May 25 19:14 UTC |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p force-systemd-flag-525854       | force-systemd-flag-525854 | jenkins | v1.35.0 | 10 May 25 19:13 UTC | 10 May 25 19:14 UTC |
	|         | --memory=2048 --force-systemd      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p pause-317241                    | pause-317241              | jenkins | v1.35.0 | 10 May 25 19:13 UTC | 10 May 25 19:14 UTC |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-065180 sudo        | NoKubernetes-065180       | jenkins | v1.35.0 | 10 May 25 19:14 UTC |                     |
	|         | systemctl is-active --quiet        |                           |         |         |                     |                     |
	|         | service kubelet                    |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-065180             | NoKubernetes-065180       | jenkins | v1.35.0 | 10 May 25 19:14 UTC | 10 May 25 19:14 UTC |
	| start   | -p force-systemd-env-429136        | force-systemd-env-429136  | jenkins | v1.35.0 | 10 May 25 19:14 UTC |                     |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-525854 ssh cat  | force-systemd-flag-525854 | jenkins | v1.35.0 | 10 May 25 19:14 UTC | 10 May 25 19:14 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-525854       | force-systemd-flag-525854 | jenkins | v1.35.0 | 10 May 25 19:14 UTC | 10 May 25 19:14 UTC |
	| start   | -p cert-expiration-355262          | cert-expiration-355262    | jenkins | v1.35.0 | 10 May 25 19:14 UTC |                     |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --cert-expiration=3m               |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/05/10 19:14:48
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0510 19:14:48.172142  438986 out.go:345] Setting OutFile to fd 1 ...
	I0510 19:14:48.172255  438986 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 19:14:48.172259  438986 out.go:358] Setting ErrFile to fd 2...
	I0510 19:14:48.172262  438986 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 19:14:48.172450  438986 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-388787/.minikube/bin
	I0510 19:14:48.173059  438986 out.go:352] Setting JSON to false
	I0510 19:14:48.174106  438986 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":32236,"bootTime":1746872252,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0510 19:14:48.174167  438986 start.go:140] virtualization: kvm guest
	I0510 19:14:48.177051  438986 out.go:177] * [cert-expiration-355262] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0510 19:14:48.178701  438986 out.go:177]   - MINIKUBE_LOCATION=20720
	I0510 19:14:48.178717  438986 notify.go:220] Checking for updates...
	I0510 19:14:48.181638  438986 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0510 19:14:48.183032  438986 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20720-388787/kubeconfig
	I0510 19:14:48.184383  438986 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-388787/.minikube
	I0510 19:14:48.185734  438986 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0510 19:14:48.187336  438986 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0510 19:14:48.189574  438986 config.go:182] Loaded profile config "force-systemd-env-429136": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 19:14:48.189716  438986 config.go:182] Loaded profile config "kubernetes-upgrade-517660": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0510 19:14:48.189893  438986 config.go:182] Loaded profile config "pause-317241": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 19:14:48.190033  438986 driver.go:404] Setting default libvirt URI to qemu:///system
	I0510 19:14:48.228302  438986 out.go:177] * Using the kvm2 driver based on user configuration
	I0510 19:14:48.229715  438986 start.go:304] selected driver: kvm2
	I0510 19:14:48.229735  438986 start.go:908] validating driver "kvm2" against <nil>
	I0510 19:14:48.229748  438986 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0510 19:14:48.230464  438986 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0510 19:14:48.230547  438986 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20720-388787/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0510 19:14:48.246819  438986 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0510 19:14:48.246879  438986 start_flags.go:311] no existing cluster config was found, will generate one from the flags 
	I0510 19:14:48.247120  438986 start_flags.go:957] Wait components to verify : map[apiserver:true system_pods:true]
	I0510 19:14:48.247139  438986 cni.go:84] Creating CNI manager for ""
	I0510 19:14:48.247180  438986 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0510 19:14:48.247187  438986 start_flags.go:320] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0510 19:14:48.247257  438986 start.go:347] cluster config:
	{Name:cert-expiration-355262 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:cert-expiration-355262 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 19:14:48.247361  438986 iso.go:125] acquiring lock: {Name:mk19640015999219180c6685480547adf0c02201 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0510 19:14:48.249551  438986 out.go:177] * Starting "cert-expiration-355262" primary control-plane node in "cert-expiration-355262" cluster
	I0510 19:14:44.775720  438136 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0510 19:14:44.808812  438136 node_ready.go:35] waiting up to 6m0s for node "pause-317241" to be "Ready" ...
	I0510 19:14:44.817862  438136 node_ready.go:49] node "pause-317241" is "Ready"
	I0510 19:14:44.817900  438136 node_ready.go:38] duration metric: took 9.042599ms for node "pause-317241" to be "Ready" ...
	I0510 19:14:44.817912  438136 api_server.go:52] waiting for apiserver process to appear ...
	I0510 19:14:44.817970  438136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:14:44.847393  438136 api_server.go:72] duration metric: took 442.45631ms to wait for apiserver process to appear ...
	I0510 19:14:44.847424  438136 api_server.go:88] waiting for apiserver healthz status ...
	I0510 19:14:44.847444  438136 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0510 19:14:44.858555  438136 api_server.go:279] https://192.168.39.10:8443/healthz returned 200:
	ok
	I0510 19:14:44.859684  438136 api_server.go:141] control plane version: v1.33.0
	I0510 19:14:44.859707  438136 api_server.go:131] duration metric: took 12.277046ms to wait for apiserver health ...
	I0510 19:14:44.859715  438136 system_pods.go:43] waiting for kube-system pods to appear ...
	I0510 19:14:44.862773  438136 system_pods.go:59] 6 kube-system pods found
	I0510 19:14:44.862817  438136 system_pods.go:61] "coredns-674b8bbfcf-2cc2n" [c1ecbdbb-8d9b-4ecf-a9a2-94d3478e1128] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0510 19:14:44.862830  438136 system_pods.go:61] "etcd-pause-317241" [139a211d-954b-48b1-9d06-04930cbae3ef] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0510 19:14:44.862844  438136 system_pods.go:61] "kube-apiserver-pause-317241" [6df56230-68a6-49e8-8fa0-9de8dcea547a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0510 19:14:44.862858  438136 system_pods.go:61] "kube-controller-manager-pause-317241" [90a731bf-4486-4c65-b1b0-b502df8db86f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0510 19:14:44.862865  438136 system_pods.go:61] "kube-proxy-skvbp" [08543e5b-1085-4de5-9922-16d2a027fb0e] Running
	I0510 19:14:44.862883  438136 system_pods.go:61] "kube-scheduler-pause-317241" [f0725bb2-7a49-4852-a9aa-8f03137243a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0510 19:14:44.862896  438136 system_pods.go:74] duration metric: took 3.17322ms to wait for pod list to return data ...
	I0510 19:14:44.862914  438136 default_sa.go:34] waiting for default service account to be created ...
	I0510 19:14:44.871600  438136 default_sa.go:45] found service account: "default"
	I0510 19:14:44.871633  438136 default_sa.go:55] duration metric: took 8.710698ms for default service account to be created ...
	I0510 19:14:44.871647  438136 system_pods.go:116] waiting for k8s-apps to be running ...
	I0510 19:14:44.876046  438136 system_pods.go:86] 6 kube-system pods found
	I0510 19:14:44.876093  438136 system_pods.go:89] "coredns-674b8bbfcf-2cc2n" [c1ecbdbb-8d9b-4ecf-a9a2-94d3478e1128] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0510 19:14:44.876114  438136 system_pods.go:89] "etcd-pause-317241" [139a211d-954b-48b1-9d06-04930cbae3ef] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0510 19:14:44.876127  438136 system_pods.go:89] "kube-apiserver-pause-317241" [6df56230-68a6-49e8-8fa0-9de8dcea547a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0510 19:14:44.876138  438136 system_pods.go:89] "kube-controller-manager-pause-317241" [90a731bf-4486-4c65-b1b0-b502df8db86f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0510 19:14:44.876145  438136 system_pods.go:89] "kube-proxy-skvbp" [08543e5b-1085-4de5-9922-16d2a027fb0e] Running
	I0510 19:14:44.876156  438136 system_pods.go:89] "kube-scheduler-pause-317241" [f0725bb2-7a49-4852-a9aa-8f03137243a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0510 19:14:44.876172  438136 system_pods.go:126] duration metric: took 4.517064ms to wait for k8s-apps to be running ...
	I0510 19:14:44.876188  438136 system_svc.go:44] waiting for kubelet service to be running ....
	I0510 19:14:44.876258  438136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0510 19:14:44.898086  438136 system_svc.go:56] duration metric: took 21.884327ms WaitForService to wait for kubelet
	I0510 19:14:44.898192  438136 kubeadm.go:578] duration metric: took 493.259009ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0510 19:14:44.898239  438136 node_conditions.go:102] verifying NodePressure condition ...
	I0510 19:14:44.901896  438136 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0510 19:14:44.901931  438136 node_conditions.go:123] node cpu capacity is 2
	I0510 19:14:44.901948  438136 node_conditions.go:105] duration metric: took 3.689961ms to run NodePressure ...
	I0510 19:14:44.901964  438136 start.go:241] waiting for startup goroutines ...
	I0510 19:14:44.901975  438136 start.go:246] waiting for cluster config update ...
	I0510 19:14:44.901985  438136 start.go:255] writing updated cluster config ...
	I0510 19:14:44.902353  438136 ssh_runner.go:195] Run: rm -f paused
	I0510 19:14:44.909805  438136 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0510 19:14:44.910403  438136 kapi.go:59] client config for pause-317241: &rest.Config{Host:"https://192.168.39.10:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20720-388787/.minikube/profiles/pause-317241/client.crt", KeyFile:"/home/jenkins/minikube-integration/20720-388787/.minikube/profiles/pause-317241/client.key", CAFile:"/home/jenkins/minikube-integration/20720-388787/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24b3a60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0510 19:14:44.917353  438136 pod_ready.go:83] waiting for pod "coredns-674b8bbfcf-2cc2n" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:14:46.925690  438136 pod_ready.go:94] pod "coredns-674b8bbfcf-2cc2n" is "Ready"
	I0510 19:14:46.925745  438136 pod_ready.go:86] duration metric: took 2.008365481s for pod "coredns-674b8bbfcf-2cc2n" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:14:46.929701  438136 pod_ready.go:83] waiting for pod "etcd-pause-317241" in "kube-system" namespace to be "Ready" or be gone ...
	W0510 19:14:48.936817  438136 pod_ready.go:104] pod "etcd-pause-317241" is not "Ready", error: <nil>
	I0510 19:14:49.454736  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:49.455266  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | unable to find current IP address of domain force-systemd-env-429136 in network mk-force-systemd-env-429136
	I0510 19:14:49.455331  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | I0510 19:14:49.455240  438695 retry.go:31] will retry after 4.989984529s: waiting for domain to come up
	I0510 19:14:48.250955  438986 preload.go:131] Checking if preload exists for k8s version v1.33.0 and runtime crio
	I0510 19:14:48.250996  438986 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-cri-o-overlay-amd64.tar.lz4
	I0510 19:14:48.251002  438986 cache.go:56] Caching tarball of preloaded images
	I0510 19:14:48.251079  438986 preload.go:172] Found /home/jenkins/minikube-integration/20720-388787/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0510 19:14:48.251085  438986 cache.go:59] Finished verifying existence of preloaded tar for v1.33.0 on crio
	I0510 19:14:48.251175  438986 profile.go:143] Saving config to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/cert-expiration-355262/config.json ...
	I0510 19:14:48.251187  438986 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/cert-expiration-355262/config.json: {Name:mkdd89f8ab0eb265ffaad36dbc023be1371ef075 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 19:14:48.251370  438986 start.go:360] acquireMachinesLock for cert-expiration-355262: {Name:mk11499d7756d503a7a24339ad1a7f9ab9dc0fab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	W0510 19:14:51.435421  438136 pod_ready.go:104] pod "etcd-pause-317241" is not "Ready", error: <nil>
	W0510 19:14:53.436710  438136 pod_ready.go:104] pod "etcd-pause-317241" is not "Ready", error: <nil>
	I0510 19:14:53.141686  435640 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0510 19:14:53.141890  435640 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:14:53.142148  435640 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:14:56.045727  438986 start.go:364] duration metric: took 7.794285786s to acquireMachinesLock for "cert-expiration-355262"
	I0510 19:14:56.045796  438986 start.go:93] Provisioning new machine with config: &{Name:cert-expiration-355262 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:cert-expiration-355262 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0510 19:14:56.045905  438986 start.go:125] createHost starting for "" (driver="kvm2")
	I0510 19:14:54.448458  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:54.449128  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has current primary IP address 192.168.50.10 and MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:54.449154  438479 main.go:141] libmachine: (force-systemd-env-429136) found domain IP: 192.168.50.10
	I0510 19:14:54.449168  438479 main.go:141] libmachine: (force-systemd-env-429136) reserving static IP address...
	I0510 19:14:54.449941  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | unable to find host DHCP lease matching {name: "force-systemd-env-429136", mac: "52:54:00:73:6a:05", ip: "192.168.50.10"} in network mk-force-systemd-env-429136
	I0510 19:14:54.544475  438479 main.go:141] libmachine: (force-systemd-env-429136) reserved static IP address 192.168.50.10 for domain force-systemd-env-429136
	I0510 19:14:54.544503  438479 main.go:141] libmachine: (force-systemd-env-429136) waiting for SSH...
	I0510 19:14:54.544543  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | Getting to WaitForSSH function...
	I0510 19:14:54.547653  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:54.548083  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6a:05", ip: ""} in network mk-force-systemd-env-429136: {Iface:virbr2 ExpiryTime:2025-05-10 20:14:49 +0000 UTC Type:0 Mac:52:54:00:73:6a:05 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:minikube Clientid:01:52:54:00:73:6a:05}
	I0510 19:14:54.548110  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined IP address 192.168.50.10 and MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:54.548244  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | Using SSH client type: external
	I0510 19:14:54.548299  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | Using SSH private key: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/force-systemd-env-429136/id_rsa (-rw-------)
	I0510 19:14:54.548357  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.10 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20720-388787/.minikube/machines/force-systemd-env-429136/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0510 19:14:54.548376  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | About to run SSH command:
	I0510 19:14:54.548387  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | exit 0
	I0510 19:14:54.676569  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | SSH cmd err, output: <nil>: 
	I0510 19:14:54.676872  438479 main.go:141] libmachine: (force-systemd-env-429136) KVM machine creation complete
	I0510 19:14:54.677308  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetConfigRaw
	I0510 19:14:54.678072  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .DriverName
	I0510 19:14:54.678302  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .DriverName
	I0510 19:14:54.678493  438479 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0510 19:14:54.678513  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetState
	I0510 19:14:54.680103  438479 main.go:141] libmachine: Detecting operating system of created instance...
	I0510 19:14:54.680121  438479 main.go:141] libmachine: Waiting for SSH to be available...
	I0510 19:14:54.680128  438479 main.go:141] libmachine: Getting to WaitForSSH function...
	I0510 19:14:54.680137  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHHostname
	I0510 19:14:54.683967  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:54.684415  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6a:05", ip: ""} in network mk-force-systemd-env-429136: {Iface:virbr2 ExpiryTime:2025-05-10 20:14:49 +0000 UTC Type:0 Mac:52:54:00:73:6a:05 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:force-systemd-env-429136 Clientid:01:52:54:00:73:6a:05}
	I0510 19:14:54.684446  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined IP address 192.168.50.10 and MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:54.684621  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHPort
	I0510 19:14:54.684818  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHKeyPath
	I0510 19:14:54.684993  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHKeyPath
	I0510 19:14:54.685173  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHUsername
	I0510 19:14:54.685348  438479 main.go:141] libmachine: Using SSH client type: native
	I0510 19:14:54.685756  438479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I0510 19:14:54.685780  438479 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0510 19:14:54.795384  438479 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0510 19:14:54.795411  438479 main.go:141] libmachine: Detecting the provisioner...
	I0510 19:14:54.795420  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHHostname
	I0510 19:14:54.798774  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:54.799299  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6a:05", ip: ""} in network mk-force-systemd-env-429136: {Iface:virbr2 ExpiryTime:2025-05-10 20:14:49 +0000 UTC Type:0 Mac:52:54:00:73:6a:05 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:force-systemd-env-429136 Clientid:01:52:54:00:73:6a:05}
	I0510 19:14:54.799338  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined IP address 192.168.50.10 and MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:54.799539  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHPort
	I0510 19:14:54.799846  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHKeyPath
	I0510 19:14:54.800158  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHKeyPath
	I0510 19:14:54.800412  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHUsername
	I0510 19:14:54.800675  438479 main.go:141] libmachine: Using SSH client type: native
	I0510 19:14:54.800998  438479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I0510 19:14:54.801014  438479 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0510 19:14:54.913550  438479 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2024.11.2-dirty
	ID=buildroot
	VERSION_ID=2024.11.2
	PRETTY_NAME="Buildroot 2024.11.2"
	
	I0510 19:14:54.913704  438479 main.go:141] libmachine: found compatible host: buildroot
	I0510 19:14:54.913715  438479 main.go:141] libmachine: Provisioning with buildroot...
	I0510 19:14:54.913723  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetMachineName
	I0510 19:14:54.914084  438479 buildroot.go:166] provisioning hostname "force-systemd-env-429136"
	I0510 19:14:54.914115  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetMachineName
	I0510 19:14:54.914289  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHHostname
	I0510 19:14:54.917302  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:54.917825  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6a:05", ip: ""} in network mk-force-systemd-env-429136: {Iface:virbr2 ExpiryTime:2025-05-10 20:14:49 +0000 UTC Type:0 Mac:52:54:00:73:6a:05 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:force-systemd-env-429136 Clientid:01:52:54:00:73:6a:05}
	I0510 19:14:54.917865  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined IP address 192.168.50.10 and MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:54.918094  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHPort
	I0510 19:14:54.918378  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHKeyPath
	I0510 19:14:54.918670  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHKeyPath
	I0510 19:14:54.918917  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHUsername
	I0510 19:14:54.919126  438479 main.go:141] libmachine: Using SSH client type: native
	I0510 19:14:54.919414  438479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I0510 19:14:54.919432  438479 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-429136 && echo "force-systemd-env-429136" | sudo tee /etc/hostname
	I0510 19:14:55.051705  438479 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-429136
	
	I0510 19:14:55.051735  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHHostname
	I0510 19:14:55.054980  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:55.055378  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6a:05", ip: ""} in network mk-force-systemd-env-429136: {Iface:virbr2 ExpiryTime:2025-05-10 20:14:49 +0000 UTC Type:0 Mac:52:54:00:73:6a:05 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:force-systemd-env-429136 Clientid:01:52:54:00:73:6a:05}
	I0510 19:14:55.055410  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined IP address 192.168.50.10 and MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:55.055671  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHPort
	I0510 19:14:55.055933  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHKeyPath
	I0510 19:14:55.056131  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHKeyPath
	I0510 19:14:55.056292  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHUsername
	I0510 19:14:55.056449  438479 main.go:141] libmachine: Using SSH client type: native
	I0510 19:14:55.056654  438479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I0510 19:14:55.056671  438479 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-429136' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-429136/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-429136' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0510 19:14:55.179121  438479 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0510 19:14:55.179165  438479 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20720-388787/.minikube CaCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20720-388787/.minikube}
	I0510 19:14:55.179189  438479 buildroot.go:174] setting up certificates
	I0510 19:14:55.179210  438479 provision.go:84] configureAuth start
	I0510 19:14:55.179224  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetMachineName
	I0510 19:14:55.179556  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetIP
	I0510 19:14:55.182772  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:55.183180  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6a:05", ip: ""} in network mk-force-systemd-env-429136: {Iface:virbr2 ExpiryTime:2025-05-10 20:14:49 +0000 UTC Type:0 Mac:52:54:00:73:6a:05 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:force-systemd-env-429136 Clientid:01:52:54:00:73:6a:05}
	I0510 19:14:55.183221  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined IP address 192.168.50.10 and MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:55.183361  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHHostname
	I0510 19:14:55.185826  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:55.186207  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6a:05", ip: ""} in network mk-force-systemd-env-429136: {Iface:virbr2 ExpiryTime:2025-05-10 20:14:49 +0000 UTC Type:0 Mac:52:54:00:73:6a:05 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:force-systemd-env-429136 Clientid:01:52:54:00:73:6a:05}
	I0510 19:14:55.186246  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined IP address 192.168.50.10 and MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:55.186452  438479 provision.go:143] copyHostCerts
	I0510 19:14:55.186492  438479 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem
	I0510 19:14:55.186546  438479 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem, removing ...
	I0510 19:14:55.186567  438479 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem
	I0510 19:14:55.186642  438479 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem (1078 bytes)
	I0510 19:14:55.186768  438479 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem
	I0510 19:14:55.186796  438479 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem, removing ...
	I0510 19:14:55.186804  438479 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem
	I0510 19:14:55.186840  438479 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem (1123 bytes)
	I0510 19:14:55.186924  438479 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem
	I0510 19:14:55.186949  438479 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem, removing ...
	I0510 19:14:55.186956  438479 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem
	I0510 19:14:55.186986  438479 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem (1675 bytes)
	I0510 19:14:55.187070  438479 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-429136 san=[127.0.0.1 192.168.50.10 force-systemd-env-429136 localhost minikube]
	I0510 19:14:55.317914  438479 provision.go:177] copyRemoteCerts
	I0510 19:14:55.318034  438479 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0510 19:14:55.318075  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHHostname
	I0510 19:14:55.322034  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:55.322556  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6a:05", ip: ""} in network mk-force-systemd-env-429136: {Iface:virbr2 ExpiryTime:2025-05-10 20:14:49 +0000 UTC Type:0 Mac:52:54:00:73:6a:05 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:force-systemd-env-429136 Clientid:01:52:54:00:73:6a:05}
	I0510 19:14:55.322626  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined IP address 192.168.50.10 and MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:55.322910  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHPort
	I0510 19:14:55.323167  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHKeyPath
	I0510 19:14:55.323414  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHUsername
	I0510 19:14:55.323628  438479 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/force-systemd-env-429136/id_rsa Username:docker}
	I0510 19:14:55.412415  438479 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0510 19:14:55.412500  438479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0510 19:14:55.448063  438479 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0510 19:14:55.448158  438479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0510 19:14:55.481600  438479 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0510 19:14:55.481696  438479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0510 19:14:55.513300  438479 provision.go:87] duration metric: took 334.074855ms to configureAuth
	I0510 19:14:55.513333  438479 buildroot.go:189] setting minikube options for container-runtime
	I0510 19:14:55.513511  438479 config.go:182] Loaded profile config "force-systemd-env-429136": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 19:14:55.513593  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHHostname
	I0510 19:14:55.516691  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:55.517048  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6a:05", ip: ""} in network mk-force-systemd-env-429136: {Iface:virbr2 ExpiryTime:2025-05-10 20:14:49 +0000 UTC Type:0 Mac:52:54:00:73:6a:05 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:force-systemd-env-429136 Clientid:01:52:54:00:73:6a:05}
	I0510 19:14:55.517097  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined IP address 192.168.50.10 and MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:55.517306  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHPort
	I0510 19:14:55.517508  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHKeyPath
	I0510 19:14:55.517664  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHKeyPath
	I0510 19:14:55.517818  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHUsername
	I0510 19:14:55.517991  438479 main.go:141] libmachine: Using SSH client type: native
	I0510 19:14:55.518225  438479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I0510 19:14:55.518249  438479 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0510 19:14:55.767149  438479 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0510 19:14:55.767186  438479 main.go:141] libmachine: Checking connection to Docker...
	I0510 19:14:55.767199  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetURL
	I0510 19:14:55.768932  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | using libvirt version 6000000
	I0510 19:14:55.771734  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:55.772197  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6a:05", ip: ""} in network mk-force-systemd-env-429136: {Iface:virbr2 ExpiryTime:2025-05-10 20:14:49 +0000 UTC Type:0 Mac:52:54:00:73:6a:05 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:force-systemd-env-429136 Clientid:01:52:54:00:73:6a:05}
	I0510 19:14:55.772232  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined IP address 192.168.50.10 and MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:55.772439  438479 main.go:141] libmachine: Docker is up and running!
	I0510 19:14:55.772460  438479 main.go:141] libmachine: Reticulating splines...
	I0510 19:14:55.772471  438479 client.go:171] duration metric: took 23.964486496s to LocalClient.Create
	I0510 19:14:55.772503  438479 start.go:167] duration metric: took 23.964562021s to libmachine.API.Create "force-systemd-env-429136"
	I0510 19:14:55.772513  438479 start.go:293] postStartSetup for "force-systemd-env-429136" (driver="kvm2")
	I0510 19:14:55.772526  438479 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0510 19:14:55.772564  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .DriverName
	I0510 19:14:55.772895  438479 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0510 19:14:55.772946  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHHostname
	I0510 19:14:55.775991  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:55.776373  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6a:05", ip: ""} in network mk-force-systemd-env-429136: {Iface:virbr2 ExpiryTime:2025-05-10 20:14:49 +0000 UTC Type:0 Mac:52:54:00:73:6a:05 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:force-systemd-env-429136 Clientid:01:52:54:00:73:6a:05}
	I0510 19:14:55.776405  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined IP address 192.168.50.10 and MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:55.776568  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHPort
	I0510 19:14:55.776770  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHKeyPath
	I0510 19:14:55.776967  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHUsername
	I0510 19:14:55.777125  438479 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/force-systemd-env-429136/id_rsa Username:docker}
	I0510 19:14:55.868379  438479 ssh_runner.go:195] Run: cat /etc/os-release
	I0510 19:14:55.873883  438479 info.go:137] Remote host: Buildroot 2024.11.2
	I0510 19:14:55.873924  438479 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-388787/.minikube/addons for local assets ...
	I0510 19:14:55.874036  438479 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-388787/.minikube/files for local assets ...
	I0510 19:14:55.874126  438479 filesync.go:149] local asset: /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem -> 3959802.pem in /etc/ssl/certs
	I0510 19:14:55.874137  438479 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem -> /etc/ssl/certs/3959802.pem
	I0510 19:14:55.874222  438479 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0510 19:14:55.887182  438479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem --> /etc/ssl/certs/3959802.pem (1708 bytes)
	I0510 19:14:55.918158  438479 start.go:296] duration metric: took 145.626627ms for postStartSetup
	I0510 19:14:55.918225  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetConfigRaw
	I0510 19:14:55.918985  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetIP
	I0510 19:14:55.922193  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:55.922631  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6a:05", ip: ""} in network mk-force-systemd-env-429136: {Iface:virbr2 ExpiryTime:2025-05-10 20:14:49 +0000 UTC Type:0 Mac:52:54:00:73:6a:05 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:force-systemd-env-429136 Clientid:01:52:54:00:73:6a:05}
	I0510 19:14:55.922667  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined IP address 192.168.50.10 and MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:55.922948  438479 profile.go:143] Saving config to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/force-systemd-env-429136/config.json ...
	I0510 19:14:55.923257  438479 start.go:128] duration metric: took 24.137801932s to createHost
	I0510 19:14:55.923296  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHHostname
	I0510 19:14:55.926652  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:55.927153  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6a:05", ip: ""} in network mk-force-systemd-env-429136: {Iface:virbr2 ExpiryTime:2025-05-10 20:14:49 +0000 UTC Type:0 Mac:52:54:00:73:6a:05 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:force-systemd-env-429136 Clientid:01:52:54:00:73:6a:05}
	I0510 19:14:55.927184  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined IP address 192.168.50.10 and MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:55.927407  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHPort
	I0510 19:14:55.927644  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHKeyPath
	I0510 19:14:55.927845  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHKeyPath
	I0510 19:14:55.928064  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHUsername
	I0510 19:14:55.928293  438479 main.go:141] libmachine: Using SSH client type: native
	I0510 19:14:55.928577  438479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I0510 19:14:55.928603  438479 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0510 19:14:56.045470  438479 main.go:141] libmachine: SSH cmd err, output: <nil>: 1746904496.013750820
	
	I0510 19:14:56.045501  438479 fix.go:216] guest clock: 1746904496.013750820
	I0510 19:14:56.045510  438479 fix.go:229] Guest: 2025-05-10 19:14:56.01375082 +0000 UTC Remote: 2025-05-10 19:14:55.923274706 +0000 UTC m=+53.421171925 (delta=90.476114ms)
	I0510 19:14:56.045539  438479 fix.go:200] guest clock delta is within tolerance: 90.476114ms
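The fix.go lines above read the guest's clock with `date +%s.%N`, compare it against the host clock, and only continue when the skew is within tolerance. A minimal, self-contained sketch of that idea, run locally rather than over SSH; the 2s tolerance and the local execution are assumptions for illustration, not minikube's actual code:

	package main

	import (
		"fmt"
		"os/exec"
		"strconv"
		"strings"
		"time"
	)

	// Sketch of the guest-clock skew check: read a clock via `date +%s.%N`,
	// convert it to a time.Time, and compare the delta against a tolerance.
	func main() {
		// minikube runs this over SSH on the VM; running it locally keeps
		// the sketch self-contained.
		out, err := exec.Command("date", "+%s.%N").Output()
		if err != nil {
			panic(err)
		}
		secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
		if err != nil {
			panic(err)
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second // assumed tolerance, not minikube's value
		fmt.Printf("guest clock delta: %v (tolerance %v)\n", delta, tolerance)
		if delta > tolerance {
			fmt.Println("delta exceeds tolerance; the guest clock would need adjusting")
		}
	}
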
	I0510 19:14:56.045547  438479 start.go:83] releasing machines lock for "force-systemd-env-429136", held for 24.260308883s
	I0510 19:14:56.045584  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .DriverName
	I0510 19:14:56.045969  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetIP
	I0510 19:14:56.049598  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:56.050169  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6a:05", ip: ""} in network mk-force-systemd-env-429136: {Iface:virbr2 ExpiryTime:2025-05-10 20:14:49 +0000 UTC Type:0 Mac:52:54:00:73:6a:05 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:force-systemd-env-429136 Clientid:01:52:54:00:73:6a:05}
	I0510 19:14:56.050219  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined IP address 192.168.50.10 and MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:56.050456  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .DriverName
	I0510 19:14:56.051127  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .DriverName
	I0510 19:14:56.051388  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .DriverName
	I0510 19:14:56.051510  438479 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0510 19:14:56.051561  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHHostname
	I0510 19:14:56.051718  438479 ssh_runner.go:195] Run: cat /version.json
	I0510 19:14:56.051754  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHHostname
	I0510 19:14:56.055032  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:56.055179  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:56.055471  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6a:05", ip: ""} in network mk-force-systemd-env-429136: {Iface:virbr2 ExpiryTime:2025-05-10 20:14:49 +0000 UTC Type:0 Mac:52:54:00:73:6a:05 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:force-systemd-env-429136 Clientid:01:52:54:00:73:6a:05}
	I0510 19:14:56.055502  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined IP address 192.168.50.10 and MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:56.055640  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHPort
	I0510 19:14:56.055671  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6a:05", ip: ""} in network mk-force-systemd-env-429136: {Iface:virbr2 ExpiryTime:2025-05-10 20:14:49 +0000 UTC Type:0 Mac:52:54:00:73:6a:05 Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:force-systemd-env-429136 Clientid:01:52:54:00:73:6a:05}
	I0510 19:14:56.055703  438479 main.go:141] libmachine: (force-systemd-env-429136) DBG | domain force-systemd-env-429136 has defined IP address 192.168.50.10 and MAC address 52:54:00:73:6a:05 in network mk-force-systemd-env-429136
	I0510 19:14:56.055858  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHKeyPath
	I0510 19:14:56.055917  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHPort
	I0510 19:14:56.056034  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHUsername
	I0510 19:14:56.056089  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHKeyPath
	I0510 19:14:56.056187  438479 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/force-systemd-env-429136/id_rsa Username:docker}
	I0510 19:14:56.056245  438479 main.go:141] libmachine: (force-systemd-env-429136) Calling .GetSSHUsername
	I0510 19:14:56.056350  438479 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/force-systemd-env-429136/id_rsa Username:docker}
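The sshutil.go lines above construct key-based SSH clients for the new VM (IP, port 22, the machine's id_rsa, user "docker"). A rough equivalent using golang.org/x/crypto/ssh; the key path and the host-key policy are chosen for illustration only, minikube's sshutil handles this internally:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// Sketch of "new ssh client: &{IP:... Port:22 SSHKeyPath:... Username:docker}"
	// expressed with the x/crypto/ssh package.
	func main() {
		key, err := os.ReadFile("/home/user/.minikube/machines/example/id_rsa") // assumed path
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for throwaway test VMs
		}
		client, err := ssh.Dial("tcp", "192.168.50.10:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		fmt.Println("connected")
	}
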
	I0510 19:14:56.182883  438479 ssh_runner.go:195] Run: systemctl --version
	I0510 19:14:56.190481  438479 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0510 19:14:56.365502  438479 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0510 19:14:56.373679  438479 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0510 19:14:56.373798  438479 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0510 19:14:56.401720  438479 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0510 19:14:56.401757  438479 start.go:495] detecting cgroup driver to use...
	I0510 19:14:56.401782  438479 start.go:499] using "systemd" cgroup driver as enforced via flags
	I0510 19:14:56.401852  438479 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0510 19:14:56.421911  438479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0510 19:14:56.448206  438479 docker.go:225] disabling cri-docker service (if available) ...
	I0510 19:14:56.448309  438479 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0510 19:14:56.467186  438479 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0510 19:14:56.485155  438479 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0510 19:14:56.648898  438479 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0510 19:14:56.804830  438479 docker.go:241] disabling docker service ...
	I0510 19:14:56.804910  438479 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0510 19:14:56.828360  438479 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0510 19:14:56.846653  438479 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0510 19:14:57.057005  438479 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0510 19:14:57.223616  438479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
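The sequence above stops, disables, and masks the cri-docker and docker units so that CRI-O ends up as the only container runtime on the node. A local sketch of the same systemctl sequence via os/exec (must run as root; the unit names are taken from the log lines above):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// Stop, disable, and mask the docker units, mirroring the ssh_runner
	// commands in the log; errors are printed rather than fatal so every
	// step is attempted.
	func main() {
		steps := [][]string{
			{"systemctl", "stop", "-f", "docker.socket"},
			{"systemctl", "stop", "-f", "docker.service"},
			{"systemctl", "disable", "docker.socket"},
			{"systemctl", "mask", "docker.service"},
		}
		for _, s := range steps {
			out, err := exec.Command(s[0], s[1:]...).CombinedOutput()
			fmt.Printf("%v -> err=%v out=%s\n", s, err, out)
		}
	}
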
	I0510 19:14:57.242019  438479 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0510 19:14:57.266512  438479 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0510 19:14:57.266610  438479 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:14:57.280442  438479 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0510 19:14:57.280556  438479 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:14:57.294568  438479 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:14:57.309110  438479 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:14:57.323359  438479 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0510 19:14:57.340438  438479 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:14:57.358828  438479 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:14:57.383071  438479 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
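The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses registry.k8s.io/pause:3.10 as the pause image and "systemd" as the cgroup manager, with conmon placed in the pod cgroup. A small Go sketch applying the same text edits to an in-memory sample of that drop-in; the sample content is illustrative and shorter than the real file:

	package main

	import (
		"fmt"
		"regexp"
	)

	// Apply the same substitutions as the sed commands in the log, against
	// a stand-in for /etc/crio/crio.conf.d/02-crio.conf.
	func main() {
		conf := `[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "system.slice"
	`
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "systemd"`)
		// drop any existing conmon_cgroup line, then re-add it after cgroup_manager
		conf = regexp.MustCompile(`(?m)^\s*conmon_cgroup = .*$\n`).
			ReplaceAllString(conf, "")
		conf = regexp.MustCompile(`(?m)^(\s*cgroup_manager = .*)$`).
			ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
		fmt.Print(conf)
	}
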
	I0510 19:14:57.396699  438479 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0510 19:14:57.410766  438479 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0510 19:14:57.410854  438479 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0510 19:14:57.429895  438479 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0510 19:14:57.443195  438479 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 19:14:57.595025  438479 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0510 19:14:57.729093  438479 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0510 19:14:57.729180  438479 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0510 19:14:57.735100  438479 start.go:563] Will wait 60s for crictl version
	I0510 19:14:57.735176  438479 ssh_runner.go:195] Run: which crictl
	I0510 19:14:57.740583  438479 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0510 19:14:57.787499  438479 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0510 19:14:57.787616  438479 ssh_runner.go:195] Run: crio --version
	I0510 19:14:57.819784  438479 ssh_runner.go:195] Run: crio --version
	I0510 19:14:57.856202  438479 out.go:177] * Preparing Kubernetes v1.33.0 on CRI-O 1.29.1 ...
	I0510 19:14:56.048050  438986 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0510 19:14:56.048336  438986 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:14:56.048415  438986 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:14:56.066612  438986 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37639
	I0510 19:14:56.067368  438986 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:14:56.068238  438986 main.go:141] libmachine: Using API Version  1
	I0510 19:14:56.068285  438986 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:14:56.068783  438986 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:14:56.069031  438986 main.go:141] libmachine: (cert-expiration-355262) Calling .GetMachineName
	I0510 19:14:56.069222  438986 main.go:141] libmachine: (cert-expiration-355262) Calling .DriverName
	I0510 19:14:56.069396  438986 start.go:159] libmachine.API.Create for "cert-expiration-355262" (driver="kvm2")
	I0510 19:14:56.069428  438986 client.go:168] LocalClient.Create starting
	I0510 19:14:56.069459  438986 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem
	I0510 19:14:56.069507  438986 main.go:141] libmachine: Decoding PEM data...
	I0510 19:14:56.069531  438986 main.go:141] libmachine: Parsing certificate...
	I0510 19:14:56.069608  438986 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem
	I0510 19:14:56.069632  438986 main.go:141] libmachine: Decoding PEM data...
	I0510 19:14:56.069647  438986 main.go:141] libmachine: Parsing certificate...
	I0510 19:14:56.069710  438986 main.go:141] libmachine: Running pre-create checks...
	I0510 19:14:56.069720  438986 main.go:141] libmachine: (cert-expiration-355262) Calling .PreCreateCheck
	I0510 19:14:56.070181  438986 main.go:141] libmachine: (cert-expiration-355262) Calling .GetConfigRaw
	I0510 19:14:56.070735  438986 main.go:141] libmachine: Creating machine...
	I0510 19:14:56.070742  438986 main.go:141] libmachine: (cert-expiration-355262) Calling .Create
	I0510 19:14:56.070958  438986 main.go:141] libmachine: (cert-expiration-355262) creating KVM machine...
	I0510 19:14:56.070967  438986 main.go:141] libmachine: (cert-expiration-355262) creating network...
	I0510 19:14:56.072499  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | found existing default KVM network
	I0510 19:14:56.073524  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | I0510 19:14:56.073336  439059 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:91:b8:05} reservation:<nil>}
	I0510 19:14:56.074500  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | I0510 19:14:56.074373  439059 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:a8:ac:69} reservation:<nil>}
	I0510 19:14:56.075710  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | I0510 19:14:56.075575  439059 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003343d0}
	I0510 19:14:56.075720  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | created network xml: 
	I0510 19:14:56.075728  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | <network>
	I0510 19:14:56.075740  438986 main.go:141] libmachine: (cert-expiration-355262) DBG |   <name>mk-cert-expiration-355262</name>
	I0510 19:14:56.075745  438986 main.go:141] libmachine: (cert-expiration-355262) DBG |   <dns enable='no'/>
	I0510 19:14:56.075748  438986 main.go:141] libmachine: (cert-expiration-355262) DBG |   
	I0510 19:14:56.075754  438986 main.go:141] libmachine: (cert-expiration-355262) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0510 19:14:56.075758  438986 main.go:141] libmachine: (cert-expiration-355262) DBG |     <dhcp>
	I0510 19:14:56.075763  438986 main.go:141] libmachine: (cert-expiration-355262) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0510 19:14:56.075774  438986 main.go:141] libmachine: (cert-expiration-355262) DBG |     </dhcp>
	I0510 19:14:56.075778  438986 main.go:141] libmachine: (cert-expiration-355262) DBG |   </ip>
	I0510 19:14:56.075831  438986 main.go:141] libmachine: (cert-expiration-355262) DBG |   
	I0510 19:14:56.075865  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | </network>
	I0510 19:14:56.075887  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | 
	I0510 19:14:56.081627  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | trying to create private KVM network mk-cert-expiration-355262 192.168.61.0/24...
	I0510 19:14:56.172296  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | private KVM network mk-cert-expiration-355262 192.168.61.0/24 created
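The driver defines and starts the private libvirt network from the XML printed above. A minimal sketch of that step with the libvirt Go bindings; the module path, connection URI, and network name are assumptions for illustration (older code imports github.com/libvirt/libvirt-go instead), and minikube's kvm2 driver does this internally:

	package main

	import (
		"fmt"

		libvirt "libvirt.org/go/libvirt"
	)

	// Define a persistent libvirt network from XML, then start it, much like
	// the "trying to create private KVM network ..." step in the log.
	func main() {
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		xml := `<network>
	  <name>mk-example</name>
	  <dns enable='no'/>
	  <ip address='192.168.61.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.61.2' end='192.168.61.253'/>
	    </dhcp>
	  </ip>
	</network>`

		network, err := conn.NetworkDefineXML(xml)
		if err != nil {
			panic(err)
		}
		defer network.Free()
		if err := network.Create(); err != nil { // start the defined network
			panic(err)
		}
		fmt.Println("network defined and started")
	}
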
	I0510 19:14:56.172322  438986 main.go:141] libmachine: (cert-expiration-355262) setting up store path in /home/jenkins/minikube-integration/20720-388787/.minikube/machines/cert-expiration-355262 ...
	I0510 19:14:56.172335  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | I0510 19:14:56.172270  439059 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20720-388787/.minikube
	I0510 19:14:56.172345  438986 main.go:141] libmachine: (cert-expiration-355262) building disk image from file:///home/jenkins/minikube-integration/20720-388787/.minikube/cache/iso/amd64/minikube-v1.35.0-1746739450-20720-amd64.iso
	I0510 19:14:56.172483  438986 main.go:141] libmachine: (cert-expiration-355262) Downloading /home/jenkins/minikube-integration/20720-388787/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20720-388787/.minikube/cache/iso/amd64/minikube-v1.35.0-1746739450-20720-amd64.iso...
	I0510 19:14:56.486960  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | I0510 19:14:56.486815  439059 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/cert-expiration-355262/id_rsa...
	I0510 19:14:56.526378  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | I0510 19:14:56.526175  439059 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/cert-expiration-355262/cert-expiration-355262.rawdisk...
	I0510 19:14:56.526407  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | Writing magic tar header
	I0510 19:14:56.526426  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | Writing SSH key tar header
	I0510 19:14:56.526442  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | I0510 19:14:56.526309  439059 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20720-388787/.minikube/machines/cert-expiration-355262 ...
	I0510 19:14:56.526456  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/cert-expiration-355262
	I0510 19:14:56.526468  438986 main.go:141] libmachine: (cert-expiration-355262) setting executable bit set on /home/jenkins/minikube-integration/20720-388787/.minikube/machines/cert-expiration-355262 (perms=drwx------)
	I0510 19:14:56.526477  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20720-388787/.minikube/machines
	I0510 19:14:56.526497  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20720-388787/.minikube
	I0510 19:14:56.526505  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20720-388787
	I0510 19:14:56.526516  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0510 19:14:56.526523  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | checking permissions on dir: /home/jenkins
	I0510 19:14:56.526531  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | checking permissions on dir: /home
	I0510 19:14:56.526537  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | skipping /home - not owner
	I0510 19:14:56.526547  438986 main.go:141] libmachine: (cert-expiration-355262) setting executable bit set on /home/jenkins/minikube-integration/20720-388787/.minikube/machines (perms=drwxr-xr-x)
	I0510 19:14:56.526558  438986 main.go:141] libmachine: (cert-expiration-355262) setting executable bit set on /home/jenkins/minikube-integration/20720-388787/.minikube (perms=drwxr-xr-x)
	I0510 19:14:56.526566  438986 main.go:141] libmachine: (cert-expiration-355262) setting executable bit set on /home/jenkins/minikube-integration/20720-388787 (perms=drwxrwxr-x)
	I0510 19:14:56.526597  438986 main.go:141] libmachine: (cert-expiration-355262) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0510 19:14:56.526607  438986 main.go:141] libmachine: (cert-expiration-355262) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0510 19:14:56.526615  438986 main.go:141] libmachine: (cert-expiration-355262) creating domain...
	I0510 19:14:56.528079  438986 main.go:141] libmachine: (cert-expiration-355262) define libvirt domain using xml: 
	I0510 19:14:56.528088  438986 main.go:141] libmachine: (cert-expiration-355262) <domain type='kvm'>
	I0510 19:14:56.528093  438986 main.go:141] libmachine: (cert-expiration-355262)   <name>cert-expiration-355262</name>
	I0510 19:14:56.528099  438986 main.go:141] libmachine: (cert-expiration-355262)   <memory unit='MiB'>2048</memory>
	I0510 19:14:56.528106  438986 main.go:141] libmachine: (cert-expiration-355262)   <vcpu>2</vcpu>
	I0510 19:14:56.528110  438986 main.go:141] libmachine: (cert-expiration-355262)   <features>
	I0510 19:14:56.528120  438986 main.go:141] libmachine: (cert-expiration-355262)     <acpi/>
	I0510 19:14:56.528125  438986 main.go:141] libmachine: (cert-expiration-355262)     <apic/>
	I0510 19:14:56.528132  438986 main.go:141] libmachine: (cert-expiration-355262)     <pae/>
	I0510 19:14:56.528137  438986 main.go:141] libmachine: (cert-expiration-355262)     
	I0510 19:14:56.528143  438986 main.go:141] libmachine: (cert-expiration-355262)   </features>
	I0510 19:14:56.528147  438986 main.go:141] libmachine: (cert-expiration-355262)   <cpu mode='host-passthrough'>
	I0510 19:14:56.528151  438986 main.go:141] libmachine: (cert-expiration-355262)   
	I0510 19:14:56.528154  438986 main.go:141] libmachine: (cert-expiration-355262)   </cpu>
	I0510 19:14:56.528157  438986 main.go:141] libmachine: (cert-expiration-355262)   <os>
	I0510 19:14:56.528166  438986 main.go:141] libmachine: (cert-expiration-355262)     <type>hvm</type>
	I0510 19:14:56.528170  438986 main.go:141] libmachine: (cert-expiration-355262)     <boot dev='cdrom'/>
	I0510 19:14:56.528173  438986 main.go:141] libmachine: (cert-expiration-355262)     <boot dev='hd'/>
	I0510 19:14:56.528178  438986 main.go:141] libmachine: (cert-expiration-355262)     <bootmenu enable='no'/>
	I0510 19:14:56.528181  438986 main.go:141] libmachine: (cert-expiration-355262)   </os>
	I0510 19:14:56.528184  438986 main.go:141] libmachine: (cert-expiration-355262)   <devices>
	I0510 19:14:56.528188  438986 main.go:141] libmachine: (cert-expiration-355262)     <disk type='file' device='cdrom'>
	I0510 19:14:56.528195  438986 main.go:141] libmachine: (cert-expiration-355262)       <source file='/home/jenkins/minikube-integration/20720-388787/.minikube/machines/cert-expiration-355262/boot2docker.iso'/>
	I0510 19:14:56.528205  438986 main.go:141] libmachine: (cert-expiration-355262)       <target dev='hdc' bus='scsi'/>
	I0510 19:14:56.528209  438986 main.go:141] libmachine: (cert-expiration-355262)       <readonly/>
	I0510 19:14:56.528212  438986 main.go:141] libmachine: (cert-expiration-355262)     </disk>
	I0510 19:14:56.528230  438986 main.go:141] libmachine: (cert-expiration-355262)     <disk type='file' device='disk'>
	I0510 19:14:56.528234  438986 main.go:141] libmachine: (cert-expiration-355262)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0510 19:14:56.528242  438986 main.go:141] libmachine: (cert-expiration-355262)       <source file='/home/jenkins/minikube-integration/20720-388787/.minikube/machines/cert-expiration-355262/cert-expiration-355262.rawdisk'/>
	I0510 19:14:56.528245  438986 main.go:141] libmachine: (cert-expiration-355262)       <target dev='hda' bus='virtio'/>
	I0510 19:14:56.528249  438986 main.go:141] libmachine: (cert-expiration-355262)     </disk>
	I0510 19:14:56.528252  438986 main.go:141] libmachine: (cert-expiration-355262)     <interface type='network'>
	I0510 19:14:56.528256  438986 main.go:141] libmachine: (cert-expiration-355262)       <source network='mk-cert-expiration-355262'/>
	I0510 19:14:56.528260  438986 main.go:141] libmachine: (cert-expiration-355262)       <model type='virtio'/>
	I0510 19:14:56.528264  438986 main.go:141] libmachine: (cert-expiration-355262)     </interface>
	I0510 19:14:56.528267  438986 main.go:141] libmachine: (cert-expiration-355262)     <interface type='network'>
	I0510 19:14:56.528273  438986 main.go:141] libmachine: (cert-expiration-355262)       <source network='default'/>
	I0510 19:14:56.528279  438986 main.go:141] libmachine: (cert-expiration-355262)       <model type='virtio'/>
	I0510 19:14:56.528286  438986 main.go:141] libmachine: (cert-expiration-355262)     </interface>
	I0510 19:14:56.528291  438986 main.go:141] libmachine: (cert-expiration-355262)     <serial type='pty'>
	I0510 19:14:56.528298  438986 main.go:141] libmachine: (cert-expiration-355262)       <target port='0'/>
	I0510 19:14:56.528303  438986 main.go:141] libmachine: (cert-expiration-355262)     </serial>
	I0510 19:14:56.528309  438986 main.go:141] libmachine: (cert-expiration-355262)     <console type='pty'>
	I0510 19:14:56.528314  438986 main.go:141] libmachine: (cert-expiration-355262)       <target type='serial' port='0'/>
	I0510 19:14:56.528321  438986 main.go:141] libmachine: (cert-expiration-355262)     </console>
	I0510 19:14:56.528332  438986 main.go:141] libmachine: (cert-expiration-355262)     <rng model='virtio'>
	I0510 19:14:56.528341  438986 main.go:141] libmachine: (cert-expiration-355262)       <backend model='random'>/dev/random</backend>
	I0510 19:14:56.528347  438986 main.go:141] libmachine: (cert-expiration-355262)     </rng>
	I0510 19:14:56.528353  438986 main.go:141] libmachine: (cert-expiration-355262)     
	I0510 19:14:56.528357  438986 main.go:141] libmachine: (cert-expiration-355262)     
	I0510 19:14:56.528363  438986 main.go:141] libmachine: (cert-expiration-355262)   </devices>
	I0510 19:14:56.528367  438986 main.go:141] libmachine: (cert-expiration-355262) </domain>
	I0510 19:14:56.528378  438986 main.go:141] libmachine: (cert-expiration-355262) 
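Once the network exists, the domain XML above is defined and the VM is booted. A minimal sketch of that define-then-create step with the same bindings; the XML here is a cut-down placeholder, not the full domain document the driver generates:

	package main

	import (
		"fmt"

		libvirt "libvirt.org/go/libvirt"
	)

	// Define a persistent domain from XML and then start it, mirroring the
	// "define libvirt domain using xml" and "creating domain..." steps above.
	func main() {
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		domainXML := `<domain type='kvm'>
	  <name>example-vm</name>
	  <memory unit='MiB'>512</memory>
	  <vcpu>1</vcpu>
	  <os><type>hvm</type></os>
	</domain>`

		dom, err := conn.DomainDefineXML(domainXML)
		if err != nil {
			panic(err)
		}
		defer dom.Free()
		if err := dom.Create(); err != nil { // boots the defined domain
			panic(err)
		}
		fmt.Println("domain defined and started")
	}
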
	I0510 19:14:56.533544  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | domain cert-expiration-355262 has defined MAC address 52:54:00:fd:5f:1d in network default
	I0510 19:14:56.534245  438986 main.go:141] libmachine: (cert-expiration-355262) starting domain...
	I0510 19:14:56.534268  438986 main.go:141] libmachine: (cert-expiration-355262) ensuring networks are active...
	I0510 19:14:56.534278  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | domain cert-expiration-355262 has defined MAC address 52:54:00:dd:9e:3a in network mk-cert-expiration-355262
	I0510 19:14:56.535023  438986 main.go:141] libmachine: (cert-expiration-355262) Ensuring network default is active
	I0510 19:14:56.535319  438986 main.go:141] libmachine: (cert-expiration-355262) Ensuring network mk-cert-expiration-355262 is active
	I0510 19:14:56.535875  438986 main.go:141] libmachine: (cert-expiration-355262) getting domain XML...
	I0510 19:14:56.536834  438986 main.go:141] libmachine: (cert-expiration-355262) creating domain...
	I0510 19:14:57.872647  438986 main.go:141] libmachine: (cert-expiration-355262) waiting for IP...
	I0510 19:14:57.873429  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | domain cert-expiration-355262 has defined MAC address 52:54:00:dd:9e:3a in network mk-cert-expiration-355262
	I0510 19:14:57.874139  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | unable to find current IP address of domain cert-expiration-355262 in network mk-cert-expiration-355262
	I0510 19:14:57.874277  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | I0510 19:14:57.874151  439059 retry.go:31] will retry after 219.598489ms: waiting for domain to come up
	I0510 19:14:58.096150  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | domain cert-expiration-355262 has defined MAC address 52:54:00:dd:9e:3a in network mk-cert-expiration-355262
	I0510 19:14:58.096683  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | unable to find current IP address of domain cert-expiration-355262 in network mk-cert-expiration-355262
	I0510 19:14:58.096747  438986 main.go:141] libmachine: (cert-expiration-355262) DBG | I0510 19:14:58.096643  439059 retry.go:31] will retry after 376.155606ms: waiting for domain to come up
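While the VM boots, the driver polls libvirt for a DHCP lease matching the domain's MAC address, retrying with a growing, jittered delay (the retry.go lines above). A generic sketch of that retry-with-backoff pattern; lookupIP is a stand-in for the real lease lookup and fails a few times on purpose:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	var attempts int

	// lookupIP simulates "ask libvirt for a DHCP lease for this MAC":
	// it fails for the first few calls, then returns an address.
	func lookupIP() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no lease yet")
		}
		return "192.168.61.10", nil
	}

	func main() {
		backoff := 200 * time.Millisecond
		for i := 0; i < 10; i++ {
			ip, err := lookupIP()
			if err == nil {
				fmt.Println("got IP:", ip)
				return
			}
			wait := backoff + time.Duration(rand.Int63n(int64(backoff))) // add jitter
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
			backoff *= 2
		}
		fmt.Println("gave up waiting for an IP")
	}
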
	W0510 19:14:55.437513  438136 pod_ready.go:104] pod "etcd-pause-317241" is not "Ready", error: <nil>
	I0510 19:14:56.941142  438136 pod_ready.go:94] pod "etcd-pause-317241" is "Ready"
	I0510 19:14:56.941176  438136 pod_ready.go:86] duration metric: took 10.011442056s for pod "etcd-pause-317241" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:14:56.943699  438136 pod_ready.go:83] waiting for pod "kube-apiserver-pause-317241" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:14:58.456585  438136 pod_ready.go:94] pod "kube-apiserver-pause-317241" is "Ready"
	I0510 19:14:58.456639  438136 pod_ready.go:86] duration metric: took 1.512907922s for pod "kube-apiserver-pause-317241" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:14:58.461913  438136 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-317241" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:14:58.470389  438136 pod_ready.go:94] pod "kube-controller-manager-pause-317241" is "Ready"
	I0510 19:14:58.470422  438136 pod_ready.go:86] duration metric: took 8.476599ms for pod "kube-controller-manager-pause-317241" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:14:58.474274  438136 pod_ready.go:83] waiting for pod "kube-proxy-skvbp" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:14:58.482439  438136 pod_ready.go:94] pod "kube-proxy-skvbp" is "Ready"
	I0510 19:14:58.482483  438136 pod_ready.go:86] duration metric: took 8.176534ms for pod "kube-proxy-skvbp" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:14:58.535212  438136 pod_ready.go:83] waiting for pod "kube-scheduler-pause-317241" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:14:58.934588  438136 pod_ready.go:94] pod "kube-scheduler-pause-317241" is "Ready"
	I0510 19:14:58.934617  438136 pod_ready.go:86] duration metric: took 399.338323ms for pod "kube-scheduler-pause-317241" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:14:58.934628  438136 pod_ready.go:40] duration metric: took 14.024775546s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0510 19:14:58.990570  438136 start.go:607] kubectl: 1.33.0, cluster: 1.33.0 (minor skew: 0)
	I0510 19:14:58.992949  438136 out.go:177] * Done! kubectl is now configured to use "pause-317241" cluster and "default" namespace by default
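The pod_ready.go lines above poll each control-plane pod until its Ready condition reports true (or the pod is gone). A minimal client-go sketch of that loop; the kubeconfig path, namespace, pod name, and timeout are illustrative, not the test harness's actual values:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// Poll a kube-system pod until its PodReady condition is True or a
	// deadline passes, roughly what the pod_ready helpers above do.
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := client.CoreV1().Pods("kube-system").
				Get(context.TODO(), "etcd-pause-317241", metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						fmt.Println("pod is Ready")
						return
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to be Ready")
	}
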
	I0510 19:14:58.142826  435640 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:14:58.143140  435640 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
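The kubeadm [kubelet-check] failure above comes from probing the kubelet's local health endpoint. A tiny sketch of that probe; a refused connection, as in the log, simply means the kubelet is not (yet) serving on port 10248:

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	// Probe the kubelet healthz endpoint the way the kubelet-check above does.
	func main() {
		client := &http.Client{Timeout: 5 * time.Second}
		resp, err := client.Get("http://localhost:10248/healthz")
		if err != nil {
			fmt.Println("kubelet not healthy:", err) // e.g. connection refused
			return
		}
		defer resp.Body.Close()
		fmt.Println("kubelet /healthz status:", resp.Status)
	}
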
	
	
	==> CRI-O <==
	May 10 19:15:02 pause-317241 crio[3017]: time="2025-05-10 19:15:02.538848794Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746904502538802541,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125819,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=304a51be-feb8-4e45-8419-2106a7fdbd0b name=/runtime.v1.ImageService/ImageFsInfo
	May 10 19:15:02 pause-317241 crio[3017]: time="2025-05-10 19:15:02.539631297Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0dde629c-feec-4c40-8ca3-21a9c7e41238 name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:15:02 pause-317241 crio[3017]: time="2025-05-10 19:15:02.539810836Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0dde629c-feec-4c40-8ca3-21a9c7e41238 name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:15:02 pause-317241 crio[3017]: time="2025-05-10 19:15:02.540176539Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:34e6c248131a5c5fe1f7df747e0077aab1986c049aba07113268029aa19ef292,PodSandboxId:fff448915d6c225f768a88c4107b0c411b288f3400557477da15aa1eef0285db,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1746904484029414801,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-2cc2n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1ecbdbb-8d9b-4ecf-a9a2-94d3478e1128,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2014853cb47d41b0658ce88cece8340e54d300cc95cc1ee4c8b1c6164a3e0fd4,PodSandboxId:5e10d365a839c16e827ee6151e426b67c48efefa36ada8ccdd191eedeec26997,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,State:CONTAINER_RUNNING,CreatedAt:1746904483425536227,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-skvbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 08543e5b-1085-4de5-9922-16d2a027fb0e,},Annotations:map[string]string{io.kubernetes.container.hash: 2406bd3f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3df19d449ad869e6c3b02da7edbd6ffb58d12f2d727f816ea15867bd4aa08d16,PodSandboxId:a39b3edf7ec64a2e10ac544b96954dd9ddcd04d24f72b9cddccfdee6ecf71de7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,State:CONTAINER_RUNNING,CreatedAt:1746904478790268218,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-317241,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21bc14276c
d5381e92e6d9f4fa417bb5,},Annotations:map[string]string{io.kubernetes.container.hash: fd54b99d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b2b5024e1df577f880cf4796775a8b53c2f1235f0b94dfa501a7e713354a4dc,PodSandboxId:3800a382e0bce1f8041ef956ce5cf003fc7b3dd72c393bf2815edc04d3bb0fe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,State:CONTAINER_RUNNING,CreatedAt:1746904478510784300,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-317241,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
bd3407a5515906aae2ca3170d960a3a,},Annotations:map[string]string{io.kubernetes.container.hash: 20846f37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d201484550c495e4de0fd8ba3f315ff5ee277b1f86a0dccd9c456e1bbc901089,PodSandboxId:90fc3520c3595d665025a9ed61fba9da3eafa398cec8e92f62e71365e516d7e2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1746904478450688392,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-317241,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a93f06e9953cac36959843399c2f269,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ac494014a96938db7a564fb5551c332727ec3747c4cbeadb8f0171a5dfbf786,PodSandboxId:b2b7064d6a453b2b80292646fc74a1172056293617f21148146f0c58df8aaa70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,State:CONTAINER_RUNNING,CreatedAt:1746904478424395512,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-317241,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0714c860dd7b384d9ba10850530c253,},Annotations:map[string]string{io.
kubernetes.container.hash: 2e2dc675,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60d3f14ff709f7784986c33fb2487c2f5a652445cbba518e696441f701e452fb,PodSandboxId:856bd62f83b4aa95e06e560c66768f23474ea1a63edd1a9e38cccbb4abed762f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_EXITED,CreatedAt:1746904382352867917,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-2cc2n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1ecbdbb-8d9b-4ecf-a9a2-94d3478e1128,},Annotations:map[string]string{io.kubernetes.container.hash: eafd0
92d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9945e09f6b6d261039029f89e168d45dd9fc4acf65b417b637a7704d3cc6df5,PodSandboxId:9b2c1f7aa1ba54b4aeb16b05ffcd2872a4d7af4e11f7ae21d5377762d1f6735a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,State:CONTAINER_EXITED,CreatedAt:1746904381974441911,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-skvbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08543e5b-1085-4de5-9922-16d2a027fb0e,},Annotations:map[string]string{io.kubernetes.container.hash: 2406bd3f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b96ba0c6867681eb5c3dd0df167dc56dd09ffcb675f8fa26472566e54feb7385,PodSandboxId:be700641b4f491936267c67677c6d291d70dd5bbb8ecdd6364b8f62f336bc473,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,State:CONTAINER_EXITED,CreatedAt:1746904368930343150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-paus
e-317241,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21bc14276cd5381e92e6d9f4fa417bb5,},Annotations:map[string]string{io.kubernetes.container.hash: fd54b99d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6989a7e3ea042c054e6f979c8042e6a4f7c82fab32f4778857b936239f6db91c,PodSandboxId:e7cc467ddfff88dbdf134a7d835cc4352ab0454ce0419546d656884e172cf011,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_EXITED,CreatedAt:1746904368903169884,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-317241,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: 1a93f06e9953cac36959843399c2f269,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b84f77943081f73cd80a1376987cceac5bbcb6932aaab74ffc59f9400d903650,PodSandboxId:afbad45567ddd6cd513797e628303d6d56e830f7a22f41bf1e684135608a128a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,State:CONTAINER_EXITED,CreatedAt:1746904368821503920,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-317241,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: a0714c860dd7b384d9ba10850530c253,},Annotations:map[string]string{io.kubernetes.container.hash: 2e2dc675,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ac36579b810dd23a78153742e60a40498c4c2744c1c1b600d92974993419a57,PodSandboxId:8c48f0b1b59eb870657c77354fe231a0f6694e2aa715e77ecab6ba4083920287,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,State:CONTAINER_EXITED,CreatedAt:1746904368701215523,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-317241,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 2bd3407a5515906aae2ca3170d960a3a,},Annotations:map[string]string{io.kubernetes.container.hash: 20846f37,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0dde629c-feec-4c40-8ca3-21a9c7e41238 name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:15:02 pause-317241 crio[3017]: time="2025-05-10 19:15:02.601892645Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=066ea948-9880-44f6-9a60-b03235cb1bc3 name=/runtime.v1.RuntimeService/Version
	May 10 19:15:02 pause-317241 crio[3017]: time="2025-05-10 19:15:02.602047731Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=066ea948-9880-44f6-9a60-b03235cb1bc3 name=/runtime.v1.RuntimeService/Version
	May 10 19:15:02 pause-317241 crio[3017]: time="2025-05-10 19:15:02.605980050Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d5e502b2-fef1-48b4-8140-38e10246da3a name=/runtime.v1.ImageService/ImageFsInfo
	May 10 19:15:02 pause-317241 crio[3017]: time="2025-05-10 19:15:02.606575889Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746904502606529612,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125819,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d5e502b2-fef1-48b4-8140-38e10246da3a name=/runtime.v1.ImageService/ImageFsInfo
	May 10 19:15:02 pause-317241 crio[3017]: time="2025-05-10 19:15:02.607811914Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d23d7bb3-6760-4106-9da0-3835cfd6fba2 name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:15:02 pause-317241 crio[3017]: time="2025-05-10 19:15:02.607896223Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d23d7bb3-6760-4106-9da0-3835cfd6fba2 name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:15:02 pause-317241 crio[3017]: time="2025-05-10 19:15:02.608165675Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:34e6c248131a5c5fe1f7df747e0077aab1986c049aba07113268029aa19ef292,PodSandboxId:fff448915d6c225f768a88c4107b0c411b288f3400557477da15aa1eef0285db,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1746904484029414801,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-2cc2n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1ecbdbb-8d9b-4ecf-a9a2-94d3478e1128,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2014853cb47d41b0658ce88cece8340e54d300cc95cc1ee4c8b1c6164a3e0fd4,PodSandboxId:5e10d365a839c16e827ee6151e426b67c48efefa36ada8ccdd191eedeec26997,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,State:CONTAINER_RUNNING,CreatedAt:1746904483425536227,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-skvbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 08543e5b-1085-4de5-9922-16d2a027fb0e,},Annotations:map[string]string{io.kubernetes.container.hash: 2406bd3f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3df19d449ad869e6c3b02da7edbd6ffb58d12f2d727f816ea15867bd4aa08d16,PodSandboxId:a39b3edf7ec64a2e10ac544b96954dd9ddcd04d24f72b9cddccfdee6ecf71de7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,State:CONTAINER_RUNNING,CreatedAt:1746904478790268218,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-317241,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21bc14276c
d5381e92e6d9f4fa417bb5,},Annotations:map[string]string{io.kubernetes.container.hash: fd54b99d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b2b5024e1df577f880cf4796775a8b53c2f1235f0b94dfa501a7e713354a4dc,PodSandboxId:3800a382e0bce1f8041ef956ce5cf003fc7b3dd72c393bf2815edc04d3bb0fe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,State:CONTAINER_RUNNING,CreatedAt:1746904478510784300,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-317241,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
bd3407a5515906aae2ca3170d960a3a,},Annotations:map[string]string{io.kubernetes.container.hash: 20846f37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d201484550c495e4de0fd8ba3f315ff5ee277b1f86a0dccd9c456e1bbc901089,PodSandboxId:90fc3520c3595d665025a9ed61fba9da3eafa398cec8e92f62e71365e516d7e2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1746904478450688392,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-317241,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a93f06e9953cac36959843399c2f269,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ac494014a96938db7a564fb5551c332727ec3747c4cbeadb8f0171a5dfbf786,PodSandboxId:b2b7064d6a453b2b80292646fc74a1172056293617f21148146f0c58df8aaa70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,State:CONTAINER_RUNNING,CreatedAt:1746904478424395512,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-317241,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0714c860dd7b384d9ba10850530c253,},Annotations:map[string]string{io.
kubernetes.container.hash: 2e2dc675,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60d3f14ff709f7784986c33fb2487c2f5a652445cbba518e696441f701e452fb,PodSandboxId:856bd62f83b4aa95e06e560c66768f23474ea1a63edd1a9e38cccbb4abed762f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_EXITED,CreatedAt:1746904382352867917,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-2cc2n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1ecbdbb-8d9b-4ecf-a9a2-94d3478e1128,},Annotations:map[string]string{io.kubernetes.container.hash: eafd0
92d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9945e09f6b6d261039029f89e168d45dd9fc4acf65b417b637a7704d3cc6df5,PodSandboxId:9b2c1f7aa1ba54b4aeb16b05ffcd2872a4d7af4e11f7ae21d5377762d1f6735a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,State:CONTAINER_EXITED,CreatedAt:1746904381974441911,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-skvbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08543e5b-1085-4de5-9922-16d2a027fb0e,},Annotations:map[string]string{io.kubernetes.container.hash: 2406bd3f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b96ba0c6867681eb5c3dd0df167dc56dd09ffcb675f8fa26472566e54feb7385,PodSandboxId:be700641b4f491936267c67677c6d291d70dd5bbb8ecdd6364b8f62f336bc473,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,State:CONTAINER_EXITED,CreatedAt:1746904368930343150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-paus
e-317241,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21bc14276cd5381e92e6d9f4fa417bb5,},Annotations:map[string]string{io.kubernetes.container.hash: fd54b99d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6989a7e3ea042c054e6f979c8042e6a4f7c82fab32f4778857b936239f6db91c,PodSandboxId:e7cc467ddfff88dbdf134a7d835cc4352ab0454ce0419546d656884e172cf011,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_EXITED,CreatedAt:1746904368903169884,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-317241,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: 1a93f06e9953cac36959843399c2f269,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b84f77943081f73cd80a1376987cceac5bbcb6932aaab74ffc59f9400d903650,PodSandboxId:afbad45567ddd6cd513797e628303d6d56e830f7a22f41bf1e684135608a128a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,State:CONTAINER_EXITED,CreatedAt:1746904368821503920,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-317241,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: a0714c860dd7b384d9ba10850530c253,},Annotations:map[string]string{io.kubernetes.container.hash: 2e2dc675,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ac36579b810dd23a78153742e60a40498c4c2744c1c1b600d92974993419a57,PodSandboxId:8c48f0b1b59eb870657c77354fe231a0f6694e2aa715e77ecab6ba4083920287,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,State:CONTAINER_EXITED,CreatedAt:1746904368701215523,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-317241,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 2bd3407a5515906aae2ca3170d960a3a,},Annotations:map[string]string{io.kubernetes.container.hash: 20846f37,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d23d7bb3-6760-4106-9da0-3835cfd6fba2 name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:15:02 pause-317241 crio[3017]: time="2025-05-10 19:15:02.675995178Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e2242505-a664-48d5-8e5f-49533e320bf8 name=/runtime.v1.RuntimeService/Version
	May 10 19:15:02 pause-317241 crio[3017]: time="2025-05-10 19:15:02.676128749Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e2242505-a664-48d5-8e5f-49533e320bf8 name=/runtime.v1.RuntimeService/Version
	May 10 19:15:02 pause-317241 crio[3017]: time="2025-05-10 19:15:02.677571214Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=16f07008-bea1-4e5c-b0b0-129f8ae6ffbf name=/runtime.v1.ImageService/ImageFsInfo
	May 10 19:15:02 pause-317241 crio[3017]: time="2025-05-10 19:15:02.678255727Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746904502678220000,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125819,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=16f07008-bea1-4e5c-b0b0-129f8ae6ffbf name=/runtime.v1.ImageService/ImageFsInfo
	May 10 19:15:02 pause-317241 crio[3017]: time="2025-05-10 19:15:02.679516807Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c260b459-0f2d-42a7-a7e4-274f97b704e8 name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:15:02 pause-317241 crio[3017]: time="2025-05-10 19:15:02.679943597Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c260b459-0f2d-42a7-a7e4-274f97b704e8 name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:15:02 pause-317241 crio[3017]: time="2025-05-10 19:15:02.681389718Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:34e6c248131a5c5fe1f7df747e0077aab1986c049aba07113268029aa19ef292,PodSandboxId:fff448915d6c225f768a88c4107b0c411b288f3400557477da15aa1eef0285db,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1746904484029414801,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-2cc2n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1ecbdbb-8d9b-4ecf-a9a2-94d3478e1128,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2014853cb47d41b0658ce88cece8340e54d300cc95cc1ee4c8b1c6164a3e0fd4,PodSandboxId:5e10d365a839c16e827ee6151e426b67c48efefa36ada8ccdd191eedeec26997,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,State:CONTAINER_RUNNING,CreatedAt:1746904483425536227,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-skvbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 08543e5b-1085-4de5-9922-16d2a027fb0e,},Annotations:map[string]string{io.kubernetes.container.hash: 2406bd3f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3df19d449ad869e6c3b02da7edbd6ffb58d12f2d727f816ea15867bd4aa08d16,PodSandboxId:a39b3edf7ec64a2e10ac544b96954dd9ddcd04d24f72b9cddccfdee6ecf71de7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,State:CONTAINER_RUNNING,CreatedAt:1746904478790268218,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-317241,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21bc14276c
d5381e92e6d9f4fa417bb5,},Annotations:map[string]string{io.kubernetes.container.hash: fd54b99d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b2b5024e1df577f880cf4796775a8b53c2f1235f0b94dfa501a7e713354a4dc,PodSandboxId:3800a382e0bce1f8041ef956ce5cf003fc7b3dd72c393bf2815edc04d3bb0fe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,State:CONTAINER_RUNNING,CreatedAt:1746904478510784300,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-317241,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
bd3407a5515906aae2ca3170d960a3a,},Annotations:map[string]string{io.kubernetes.container.hash: 20846f37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d201484550c495e4de0fd8ba3f315ff5ee277b1f86a0dccd9c456e1bbc901089,PodSandboxId:90fc3520c3595d665025a9ed61fba9da3eafa398cec8e92f62e71365e516d7e2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1746904478450688392,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-317241,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a93f06e9953cac36959843399c2f269,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ac494014a96938db7a564fb5551c332727ec3747c4cbeadb8f0171a5dfbf786,PodSandboxId:b2b7064d6a453b2b80292646fc74a1172056293617f21148146f0c58df8aaa70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,State:CONTAINER_RUNNING,CreatedAt:1746904478424395512,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-317241,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0714c860dd7b384d9ba10850530c253,},Annotations:map[string]string{io.
kubernetes.container.hash: 2e2dc675,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60d3f14ff709f7784986c33fb2487c2f5a652445cbba518e696441f701e452fb,PodSandboxId:856bd62f83b4aa95e06e560c66768f23474ea1a63edd1a9e38cccbb4abed762f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_EXITED,CreatedAt:1746904382352867917,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-2cc2n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1ecbdbb-8d9b-4ecf-a9a2-94d3478e1128,},Annotations:map[string]string{io.kubernetes.container.hash: eafd0
92d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9945e09f6b6d261039029f89e168d45dd9fc4acf65b417b637a7704d3cc6df5,PodSandboxId:9b2c1f7aa1ba54b4aeb16b05ffcd2872a4d7af4e11f7ae21d5377762d1f6735a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,State:CONTAINER_EXITED,CreatedAt:1746904381974441911,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-skvbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08543e5b-1085-4de5-9922-16d2a027fb0e,},Annotations:map[string]string{io.kubernetes.container.hash: 2406bd3f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b96ba0c6867681eb5c3dd0df167dc56dd09ffcb675f8fa26472566e54feb7385,PodSandboxId:be700641b4f491936267c67677c6d291d70dd5bbb8ecdd6364b8f62f336bc473,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,State:CONTAINER_EXITED,CreatedAt:1746904368930343150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-paus
e-317241,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21bc14276cd5381e92e6d9f4fa417bb5,},Annotations:map[string]string{io.kubernetes.container.hash: fd54b99d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6989a7e3ea042c054e6f979c8042e6a4f7c82fab32f4778857b936239f6db91c,PodSandboxId:e7cc467ddfff88dbdf134a7d835cc4352ab0454ce0419546d656884e172cf011,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_EXITED,CreatedAt:1746904368903169884,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-317241,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: 1a93f06e9953cac36959843399c2f269,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b84f77943081f73cd80a1376987cceac5bbcb6932aaab74ffc59f9400d903650,PodSandboxId:afbad45567ddd6cd513797e628303d6d56e830f7a22f41bf1e684135608a128a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,State:CONTAINER_EXITED,CreatedAt:1746904368821503920,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-317241,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: a0714c860dd7b384d9ba10850530c253,},Annotations:map[string]string{io.kubernetes.container.hash: 2e2dc675,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ac36579b810dd23a78153742e60a40498c4c2744c1c1b600d92974993419a57,PodSandboxId:8c48f0b1b59eb870657c77354fe231a0f6694e2aa715e77ecab6ba4083920287,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,State:CONTAINER_EXITED,CreatedAt:1746904368701215523,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-317241,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 2bd3407a5515906aae2ca3170d960a3a,},Annotations:map[string]string{io.kubernetes.container.hash: 20846f37,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c260b459-0f2d-42a7-a7e4-274f97b704e8 name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:15:02 pause-317241 crio[3017]: time="2025-05-10 19:15:02.745874603Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9cf6dd1e-0274-4d55-af26-7d4c9e427528 name=/runtime.v1.RuntimeService/Version
	May 10 19:15:02 pause-317241 crio[3017]: time="2025-05-10 19:15:02.745966936Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9cf6dd1e-0274-4d55-af26-7d4c9e427528 name=/runtime.v1.RuntimeService/Version
	May 10 19:15:02 pause-317241 crio[3017]: time="2025-05-10 19:15:02.749071459Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=79398829-f3c6-43e1-846f-0865ed655c02 name=/runtime.v1.ImageService/ImageFsInfo
	May 10 19:15:02 pause-317241 crio[3017]: time="2025-05-10 19:15:02.753811622Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746904502753686059,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125819,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=79398829-f3c6-43e1-846f-0865ed655c02 name=/runtime.v1.ImageService/ImageFsInfo
	May 10 19:15:02 pause-317241 crio[3017]: time="2025-05-10 19:15:02.757220310Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cf0a0fb8-e81e-4fee-9312-703a957ffed1 name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:15:02 pause-317241 crio[3017]: time="2025-05-10 19:15:02.757442039Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cf0a0fb8-e81e-4fee-9312-703a957ffed1 name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:15:02 pause-317241 crio[3017]: time="2025-05-10 19:15:02.764224428Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:34e6c248131a5c5fe1f7df747e0077aab1986c049aba07113268029aa19ef292,PodSandboxId:fff448915d6c225f768a88c4107b0c411b288f3400557477da15aa1eef0285db,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1746904484029414801,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-2cc2n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1ecbdbb-8d9b-4ecf-a9a2-94d3478e1128,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2014853cb47d41b0658ce88cece8340e54d300cc95cc1ee4c8b1c6164a3e0fd4,PodSandboxId:5e10d365a839c16e827ee6151e426b67c48efefa36ada8ccdd191eedeec26997,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,State:CONTAINER_RUNNING,CreatedAt:1746904483425536227,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-skvbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 08543e5b-1085-4de5-9922-16d2a027fb0e,},Annotations:map[string]string{io.kubernetes.container.hash: 2406bd3f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3df19d449ad869e6c3b02da7edbd6ffb58d12f2d727f816ea15867bd4aa08d16,PodSandboxId:a39b3edf7ec64a2e10ac544b96954dd9ddcd04d24f72b9cddccfdee6ecf71de7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,State:CONTAINER_RUNNING,CreatedAt:1746904478790268218,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-317241,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21bc14276c
d5381e92e6d9f4fa417bb5,},Annotations:map[string]string{io.kubernetes.container.hash: fd54b99d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b2b5024e1df577f880cf4796775a8b53c2f1235f0b94dfa501a7e713354a4dc,PodSandboxId:3800a382e0bce1f8041ef956ce5cf003fc7b3dd72c393bf2815edc04d3bb0fe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,State:CONTAINER_RUNNING,CreatedAt:1746904478510784300,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-317241,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
bd3407a5515906aae2ca3170d960a3a,},Annotations:map[string]string{io.kubernetes.container.hash: 20846f37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d201484550c495e4de0fd8ba3f315ff5ee277b1f86a0dccd9c456e1bbc901089,PodSandboxId:90fc3520c3595d665025a9ed61fba9da3eafa398cec8e92f62e71365e516d7e2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1746904478450688392,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-317241,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a93f06e9953cac36959843399c2f269,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ac494014a96938db7a564fb5551c332727ec3747c4cbeadb8f0171a5dfbf786,PodSandboxId:b2b7064d6a453b2b80292646fc74a1172056293617f21148146f0c58df8aaa70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,State:CONTAINER_RUNNING,CreatedAt:1746904478424395512,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-317241,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0714c860dd7b384d9ba10850530c253,},Annotations:map[string]string{io.
kubernetes.container.hash: 2e2dc675,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60d3f14ff709f7784986c33fb2487c2f5a652445cbba518e696441f701e452fb,PodSandboxId:856bd62f83b4aa95e06e560c66768f23474ea1a63edd1a9e38cccbb4abed762f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_EXITED,CreatedAt:1746904382352867917,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-2cc2n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1ecbdbb-8d9b-4ecf-a9a2-94d3478e1128,},Annotations:map[string]string{io.kubernetes.container.hash: eafd0
92d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9945e09f6b6d261039029f89e168d45dd9fc4acf65b417b637a7704d3cc6df5,PodSandboxId:9b2c1f7aa1ba54b4aeb16b05ffcd2872a4d7af4e11f7ae21d5377762d1f6735a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68,State:CONTAINER_EXITED,CreatedAt:1746904381974441911,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-skvbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08543e5b-1085-4de5-9922-16d2a027fb0e,},Annotations:map[string]string{io.kubernetes.container.hash: 2406bd3f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b96ba0c6867681eb5c3dd0df167dc56dd09ffcb675f8fa26472566e54feb7385,PodSandboxId:be700641b4f491936267c67677c6d291d70dd5bbb8ecdd6364b8f62f336bc473,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4,State:CONTAINER_EXITED,CreatedAt:1746904368930343150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-paus
e-317241,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21bc14276cd5381e92e6d9f4fa417bb5,},Annotations:map[string]string{io.kubernetes.container.hash: fd54b99d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6989a7e3ea042c054e6f979c8042e6a4f7c82fab32f4778857b936239f6db91c,PodSandboxId:e7cc467ddfff88dbdf134a7d835cc4352ab0454ce0419546d656884e172cf011,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_EXITED,CreatedAt:1746904368903169884,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-317241,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: 1a93f06e9953cac36959843399c2f269,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b84f77943081f73cd80a1376987cceac5bbcb6932aaab74ffc59f9400d903650,PodSandboxId:afbad45567ddd6cd513797e628303d6d56e830f7a22f41bf1e684135608a128a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4,State:CONTAINER_EXITED,CreatedAt:1746904368821503920,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-317241,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: a0714c860dd7b384d9ba10850530c253,},Annotations:map[string]string{io.kubernetes.container.hash: 2e2dc675,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ac36579b810dd23a78153742e60a40498c4c2744c1c1b600d92974993419a57,PodSandboxId:8c48f0b1b59eb870657c77354fe231a0f6694e2aa715e77ecab6ba4083920287,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02,State:CONTAINER_EXITED,CreatedAt:1746904368701215523,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-317241,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 2bd3407a5515906aae2ca3170d960a3a,},Annotations:map[string]string{io.kubernetes.container.hash: 20846f37,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cf0a0fb8-e81e-4fee-9312-703a957ffed1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	34e6c248131a5       1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b   19 seconds ago      Running             coredns                   1                   fff448915d6c2       coredns-674b8bbfcf-2cc2n
	2014853cb47d4       f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68   19 seconds ago      Running             kube-proxy                1                   5e10d365a839c       kube-proxy-skvbp
	3df19d449ad86       8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4   24 seconds ago      Running             kube-scheduler            1                   a39b3edf7ec64       kube-scheduler-pause-317241
	5b2b5024e1df5       1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02   24 seconds ago      Running             kube-controller-manager   1                   3800a382e0bce       kube-controller-manager-pause-317241
	d201484550c49       499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1   24 seconds ago      Running             etcd                      1                   90fc3520c3595       etcd-pause-317241
	2ac494014a969       6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4   24 seconds ago      Running             kube-apiserver            1                   b2b7064d6a453       kube-apiserver-pause-317241
	60d3f14ff709f       1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b   2 minutes ago       Exited              coredns                   0                   856bd62f83b4a       coredns-674b8bbfcf-2cc2n
	c9945e09f6b6d       f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68   2 minutes ago       Exited              kube-proxy                0                   9b2c1f7aa1ba5       kube-proxy-skvbp
	b96ba0c686768       8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4   2 minutes ago       Exited              kube-scheduler            0                   be700641b4f49       kube-scheduler-pause-317241
	6989a7e3ea042       499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1   2 minutes ago       Exited              etcd                      0                   e7cc467ddfff8       etcd-pause-317241
	b84f77943081f       6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4   2 minutes ago       Exited              kube-apiserver            0                   afbad45567ddd       kube-apiserver-pause-317241
	5ac36579b810d       1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02   2 minutes ago       Exited              kube-controller-manager   0                   8c48f0b1b59eb       kube-controller-manager-pause-317241
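	
	The table above is CRI-O's own view of the containers on the node. A minimal way to reproduce it while the profile is still running (an illustrative sketch, assuming crictl is preconfigured for the CRI-O socket as it is on stock minikube nodes) is:
	
	  out/minikube-linux-amd64 -p pause-317241 ssh "sudo crictl ps -a"
	
	The CONTAINER/IMAGE/STATE/ATTEMPT/POD columns come from the same RuntimeService/ListContainers RPC whose request/response pairs appear in the debug log entries above.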
	
	
	==> coredns [34e6c248131a5c5fe1f7df747e0077aab1986c049aba07113268029aa19ef292] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.0
	linux/amd64, go1.23.3, 51e11f1
	[INFO] 127.0.0.1:42670 - 30856 "HINFO IN 7684510342706217908.6727844740725176358. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016294319s
	
	
	==> coredns [60d3f14ff709f7784986c33fb2487c2f5a652445cbba518e696441f701e452fb] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.0
	linux/amd64, go1.23.3, 51e11f1
	[INFO] 127.0.0.1:47641 - 22513 "HINFO IN 1095390168192950703.4527940315372221367. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019643509s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
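	
	The repeated "dial tcp 10.96.0.1:443: i/o timeout" errors from this first coredns instance line up with the control-plane restart visible in the etcd and kube-apiserver sections below: 10.96.0.1 is the ClusterIP of the default/kubernetes Service (the first address of the 10.96.0.0/12 service CIDR), so it stops answering while the apiserver is down. A quick sanity check (a sketch, assuming the pause-317241 kubeconfig context that minikube creates for the profile) is:
	
	  kubectl --context pause-317241 get svc kubernetes -o wide
	
	which should report CLUSTER-IP 10.96.0.1 once the apiserver is reachable again.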
	
	
	==> describe nodes <==
	Name:               pause-317241
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-317241
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e96c83983357cd8557f3cdfe077a25cc73d485a4
	                    minikube.k8s.io/name=pause-317241
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_05_10T19_12_56_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 May 2025 19:12:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-317241
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 May 2025 19:15:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 May 2025 19:14:42 +0000   Sat, 10 May 2025 19:12:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 May 2025 19:14:42 +0000   Sat, 10 May 2025 19:12:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 May 2025 19:14:42 +0000   Sat, 10 May 2025 19:12:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 May 2025 19:14:42 +0000   Sat, 10 May 2025 19:12:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.10
	  Hostname:    pause-317241
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015664Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015664Ki
	  pods:               110
	System Info:
	  Machine ID:                 f4188d6039414f158be6e0dfa4fac62c
	  System UUID:                f4188d60-3941-4f15-8be6-e0dfa4fac62c
	  Boot ID:                    9e56bab3-4e29-44a7-88b2-5d509d360c89
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2024.11.2
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.33.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-674b8bbfcf-2cc2n                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m2s
	  kube-system                 etcd-pause-317241                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m7s
	  kube-system                 kube-apiserver-pause-317241             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-controller-manager-pause-317241    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-proxy-skvbp                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-scheduler-pause-317241             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 19s                    kube-proxy       
	  Normal  Starting                 2m                     kube-proxy       
	  Normal  NodeAllocatableEnforced  2m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     2m15s (x7 over 2m16s)  kubelet          Node pause-317241 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m15s (x8 over 2m16s)  kubelet          Node pause-317241 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m15s (x8 over 2m16s)  kubelet          Node pause-317241 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m8s                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m8s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     2m7s                   kubelet          Node pause-317241 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m7s                   kubelet          Node pause-317241 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                2m7s                   kubelet          Node pause-317241 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  2m7s                   kubelet          Node pause-317241 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           2m3s                   node-controller  Node pause-317241 event: Registered Node pause-317241 in Controller
	  Normal  Starting                 26s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  25s (x8 over 25s)      kubelet          Node pause-317241 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s (x8 over 25s)      kubelet          Node pause-317241 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s (x7 over 25s)      kubelet          Node pause-317241 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  25s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18s                    node-controller  Node pause-317241 event: Registered Node pause-317241 in Controller
	
	
	==> dmesg <==
	[May10 19:12] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.000002] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.000035] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.001438] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.006667] (rpcbind)[143]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.135091] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000005] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.103297] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.117767] kauditd_printk_skb: 46 callbacks suppressed
	[  +0.119071] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.155064] kauditd_printk_skb: 67 callbacks suppressed
	[May10 19:13] kauditd_printk_skb: 19 callbacks suppressed
	[ +10.930868] kauditd_printk_skb: 66 callbacks suppressed
	[ +24.886516] kauditd_printk_skb: 22 callbacks suppressed
	[May10 19:14] kauditd_printk_skb: 105 callbacks suppressed
	[  +0.000086] kauditd_printk_skb: 39 callbacks suppressed
	
	
	==> etcd [6989a7e3ea042c054e6f979c8042e6a4f7c82fab32f4778857b936239f6db91c] <==
	{"level":"info","ts":"2025-05-10T19:12:49.424578Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e became leader at term 2"}
	{"level":"info","ts":"2025-05-10T19:12:49.424672Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f8926bd555ec3d0e elected leader f8926bd555ec3d0e at term 2"}
	{"level":"info","ts":"2025-05-10T19:12:49.427557Z","caller":"etcdserver/server.go:2144","msg":"published local member to cluster through raft","local-member-id":"f8926bd555ec3d0e","local-member-attributes":"{Name:pause-317241 ClientURLs:[https://192.168.39.10:2379]}","request-path":"/0/members/f8926bd555ec3d0e/attributes","cluster-id":"3a710b3f69152e32","publish-timeout":"7s"}
	{"level":"info","ts":"2025-05-10T19:12:49.427793Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T19:12:49.428362Z","caller":"etcdserver/server.go:2697","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-05-10T19:12:49.434259Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T19:12:49.440811Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.10:2379"}
	{"level":"info","ts":"2025-05-10T19:12:49.434614Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T19:12:49.436178Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-05-10T19:12:49.442225Z","caller":"membership/cluster.go:587","msg":"set initial cluster version","cluster-id":"3a710b3f69152e32","local-member-id":"f8926bd555ec3d0e","cluster-version":"3.5"}
	{"level":"info","ts":"2025-05-10T19:12:49.445201Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-05-10T19:12:49.445365Z","caller":"etcdserver/server.go:2721","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-05-10T19:12:49.445678Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-05-10T19:12:49.446482Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T19:12:49.447622Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-05-10T19:14:25.463058Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-05-10T19:14:25.463232Z","caller":"embed/etcd.go:408","msg":"closing etcd server","name":"pause-317241","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.10:2380"],"advertise-client-urls":["https://192.168.39.10:2379"]}
	{"level":"info","ts":"2025-05-10T19:14:25.539900Z","caller":"etcdserver/server.go:1546","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f8926bd555ec3d0e","current-leader-member-id":"f8926bd555ec3d0e"}
	{"level":"warn","ts":"2025-05-10T19:14:25.540154Z","caller":"embed/serve.go:235","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-05-10T19:14:25.540172Z","caller":"embed/serve.go:235","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.10:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-05-10T19:14:25.540468Z","caller":"embed/serve.go:237","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.10:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-05-10T19:14:25.540298Z","caller":"embed/serve.go:237","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-05-10T19:14:25.544036Z","caller":"embed/etcd.go:613","msg":"stopping serving peer traffic","address":"192.168.39.10:2380"}
	{"level":"info","ts":"2025-05-10T19:14:25.544214Z","caller":"embed/etcd.go:618","msg":"stopped serving peer traffic","address":"192.168.39.10:2380"}
	{"level":"info","ts":"2025-05-10T19:14:25.544264Z","caller":"embed/etcd.go:410","msg":"closed etcd server","name":"pause-317241","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.10:2380"],"advertise-client-urls":["https://192.168.39.10:2379"]}
	
	
	==> etcd [d201484550c495e4de0fd8ba3f315ff5ee277b1f86a0dccd9c456e1bbc901089] <==
	{"level":"info","ts":"2025-05-10T19:14:38.984901Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-05-10T19:14:38.985073Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-05-10T19:14:38.985108Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-05-10T19:14:38.986985Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T19:14:38.990684Z","caller":"embed/etcd.go:762","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-05-10T19:14:38.991360Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"f8926bd555ec3d0e","initial-advertise-peer-urls":["https://192.168.39.10:2380"],"listen-peer-urls":["https://192.168.39.10:2380"],"advertise-client-urls":["https://192.168.39.10:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.10:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-05-10T19:14:38.991486Z","caller":"embed/etcd.go:908","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-05-10T19:14:38.991845Z","caller":"embed/etcd.go:633","msg":"serving peer traffic","address":"192.168.39.10:2380"}
	{"level":"info","ts":"2025-05-10T19:14:38.992284Z","caller":"embed/etcd.go:603","msg":"cmux::serve","address":"192.168.39.10:2380"}
	{"level":"info","ts":"2025-05-10T19:14:39.324830Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e is starting a new election at term 2"}
	{"level":"info","ts":"2025-05-10T19:14:39.324956Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e became pre-candidate at term 2"}
	{"level":"info","ts":"2025-05-10T19:14:39.324998Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e received MsgPreVoteResp from f8926bd555ec3d0e at term 2"}
	{"level":"info","ts":"2025-05-10T19:14:39.325033Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e became candidate at term 3"}
	{"level":"info","ts":"2025-05-10T19:14:39.325138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e received MsgVoteResp from f8926bd555ec3d0e at term 3"}
	{"level":"info","ts":"2025-05-10T19:14:39.325192Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e became leader at term 3"}
	{"level":"info","ts":"2025-05-10T19:14:39.325222Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f8926bd555ec3d0e elected leader f8926bd555ec3d0e at term 3"}
	{"level":"info","ts":"2025-05-10T19:14:39.334004Z","caller":"etcdserver/server.go:2144","msg":"published local member to cluster through raft","local-member-id":"f8926bd555ec3d0e","local-member-attributes":"{Name:pause-317241 ClientURLs:[https://192.168.39.10:2379]}","request-path":"/0/members/f8926bd555ec3d0e/attributes","cluster-id":"3a710b3f69152e32","publish-timeout":"7s"}
	{"level":"info","ts":"2025-05-10T19:14:39.334132Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T19:14:39.336784Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-05-10T19:14:39.336836Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-05-10T19:14:39.334170Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T19:14:39.338288Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T19:14:39.339020Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.10:2379"}
	{"level":"info","ts":"2025-05-10T19:14:39.341169Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T19:14:39.343660Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 19:15:04 up 2 min,  0 user,  load average: 1.26, 0.57, 0.22
	Linux pause-317241 5.10.207 #1 SMP Fri May 9 03:49:24 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2024.11.2"
	
	
	==> kube-apiserver [2ac494014a96938db7a564fb5551c332727ec3747c4cbeadb8f0171a5dfbf786] <==
	I0510 19:14:42.048000       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0510 19:14:42.051861       1 shared_informer.go:357] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I0510 19:14:42.052000       1 default_servicecidr_controller.go:136] Shutting down kubernetes-service-cidr-controller
	I0510 19:14:42.060410       1 cache.go:39] Caches are synced for autoregister controller
	I0510 19:14:42.060971       1 shared_informer.go:357] "Caches are synced" controller="ipallocator-repair-controller"
	I0510 19:14:42.061345       1 shared_informer.go:357] "Caches are synced" controller="node_authorizer"
	I0510 19:14:42.064449       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 19:14:42.073914       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0510 19:14:42.074441       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0510 19:14:42.074759       1 shared_informer.go:357] "Caches are synced" controller="configmaps"
	I0510 19:14:42.075770       1 shared_informer.go:357] "Caches are synced" controller="cluster_authentication_trust_controller"
	I0510 19:14:42.075818       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0510 19:14:42.075826       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0510 19:14:42.078308       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0510 19:14:42.079596       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0510 19:14:42.884386       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0510 19:14:43.002373       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0510 19:14:43.884951       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0510 19:14:44.007664       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0510 19:14:44.245831       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0510 19:14:44.324578       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0510 19:14:45.463864       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 19:14:45.603319       1 controller.go:667] quota admission added evaluator for: endpoints
	I0510 19:14:45.708024       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0510 19:14:45.855557       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [b84f77943081f73cd80a1376987cceac5bbcb6932aaab74ffc59f9400d903650] <==
	W0510 19:14:25.476669       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0510 19:14:25.476798       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0510 19:14:25.476967       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0510 19:14:25.477170       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0510 19:14:25.477309       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0510 19:14:25.477387       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0510 19:14:25.477499       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0510 19:14:25.477621       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0510 19:14:25.477771       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0510 19:14:25.477874       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0510 19:14:25.478002       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0510 19:14:25.478117       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0510 19:14:25.479240       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0510 19:14:25.479416       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0510 19:14:25.479551       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0510 19:14:25.479674       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0510 19:14:25.479908       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0510 19:14:25.480120       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0510 19:14:25.480242       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0510 19:14:25.480364       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0510 19:14:25.480549       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0510 19:14:25.480579       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0510 19:14:25.480695       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0510 19:14:25.480897       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0510 19:14:25.481009       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [5ac36579b810dd23a78153742e60a40498c4c2744c1c1b600d92974993419a57] <==
	I0510 19:13:00.306988       1 shared_informer.go:357] "Caches are synced" controller="node"
	I0510 19:13:00.308301       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0510 19:13:00.308374       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0510 19:13:00.308397       1 shared_informer.go:350] "Waiting for caches to sync" controller="cidrallocator"
	I0510 19:13:00.308415       1 shared_informer.go:357] "Caches are synced" controller="cidrallocator"
	I0510 19:13:00.320914       1 shared_informer.go:357] "Caches are synced" controller="ReplicationController"
	I0510 19:13:00.342666       1 shared_informer.go:357] "Caches are synced" controller="endpoint"
	I0510 19:13:00.342817       1 shared_informer.go:357] "Caches are synced" controller="bootstrap_signer"
	I0510 19:13:00.344347       1 shared_informer.go:357] "Caches are synced" controller="ClusterRoleAggregator"
	I0510 19:13:00.345471       1 shared_informer.go:357] "Caches are synced" controller="stateful set"
	I0510 19:13:00.351334       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-317241" podCIDRs=["10.244.0.0/24"]
	I0510 19:13:00.392698       1 shared_informer.go:357] "Caches are synced" controller="attach detach"
	I0510 19:13:00.392814       1 shared_informer.go:357] "Caches are synced" controller="persistent volume"
	I0510 19:13:00.499983       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0510 19:13:00.502930       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0510 19:13:00.504418       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0510 19:13:00.594171       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrapproving"
	I0510 19:13:00.598974       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0510 19:13:00.599238       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0510 19:13:00.614435       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0510 19:13:00.627576       1 shared_informer.go:357] "Caches are synced" controller="HPA"
	I0510 19:13:01.040774       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0510 19:13:01.040799       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0510 19:13:01.040806       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0510 19:13:01.049892       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [5b2b5024e1df577f880cf4796775a8b53c2f1235f0b94dfa501a7e713354a4dc] <==
	I0510 19:14:45.363505       1 shared_informer.go:357] "Caches are synced" controller="TTL after finished"
	I0510 19:14:45.366481       1 shared_informer.go:357] "Caches are synced" controller="attach detach"
	I0510 19:14:45.373838       1 shared_informer.go:357] "Caches are synced" controller="ClusterRoleAggregator"
	I0510 19:14:45.378273       1 shared_informer.go:357] "Caches are synced" controller="taint-eviction-controller"
	I0510 19:14:45.382845       1 shared_informer.go:357] "Caches are synced" controller="service-cidr-controller"
	I0510 19:14:45.393909       1 shared_informer.go:357] "Caches are synced" controller="job"
	I0510 19:14:45.399495       1 shared_informer.go:357] "Caches are synced" controller="namespace"
	I0510 19:14:45.399633       1 shared_informer.go:357] "Caches are synced" controller="endpoint"
	I0510 19:14:45.401646       1 shared_informer.go:357] "Caches are synced" controller="GC"
	I0510 19:14:45.401807       1 shared_informer.go:357] "Caches are synced" controller="taint"
	I0510 19:14:45.401941       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0510 19:14:45.402043       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-317241"
	I0510 19:14:45.402155       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0510 19:14:45.409262       1 shared_informer.go:357] "Caches are synced" controller="ephemeral"
	I0510 19:14:45.491978       1 shared_informer.go:357] "Caches are synced" controller="ReplicaSet"
	I0510 19:14:45.509292       1 shared_informer.go:357] "Caches are synced" controller="disruption"
	I0510 19:14:45.523087       1 shared_informer.go:357] "Caches are synced" controller="deployment"
	I0510 19:14:45.603642       1 shared_informer.go:357] "Caches are synced" controller="endpoint_slice"
	I0510 19:14:45.615487       1 shared_informer.go:357] "Caches are synced" controller="endpoint_slice_mirroring"
	I0510 19:14:45.687623       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0510 19:14:45.696762       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0510 19:14:46.129280       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0510 19:14:46.130590       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0510 19:14:46.130630       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0510 19:14:46.130641       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [2014853cb47d41b0658ce88cece8340e54d300cc95cc1ee4c8b1c6164a3e0fd4] <==
	E0510 19:14:43.844815       1 proxier.go:732] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0510 19:14:43.861147       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.39.10"]
	E0510 19:14:43.861233       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0510 19:14:43.978934       1 server_linux.go:122] "No iptables support for family" ipFamily="IPv6"
	I0510 19:14:43.978972       1 server.go:256] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0510 19:14:43.979114       1 server_linux.go:145] "Using iptables Proxier"
	I0510 19:14:44.026322       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0510 19:14:44.026678       1 server.go:516] "Version info" version="v1.33.0"
	I0510 19:14:44.026700       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0510 19:14:44.044281       1 config.go:199] "Starting service config controller"
	I0510 19:14:44.044302       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0510 19:14:44.044329       1 config.go:105] "Starting endpoint slice config controller"
	I0510 19:14:44.044334       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0510 19:14:44.044350       1 config.go:440] "Starting serviceCIDR config controller"
	I0510 19:14:44.044356       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0510 19:14:44.044393       1 config.go:329] "Starting node config controller"
	I0510 19:14:44.044398       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0510 19:14:44.144942       1 shared_informer.go:357] "Caches are synced" controller="node config"
	I0510 19:14:44.144990       1 shared_informer.go:357] "Caches are synced" controller="service config"
	I0510 19:14:44.145026       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	I0510 19:14:44.145753       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [c9945e09f6b6d261039029f89e168d45dd9fc4acf65b417b637a7704d3cc6df5] <==
	E0510 19:13:02.693457       1 proxier.go:732] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0510 19:13:02.758813       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.39.10"]
	E0510 19:13:02.759264       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0510 19:13:02.813058       1 server_linux.go:122] "No iptables support for family" ipFamily="IPv6"
	I0510 19:13:02.813121       1 server.go:256] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0510 19:13:02.813154       1 server_linux.go:145] "Using iptables Proxier"
	I0510 19:13:02.823619       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0510 19:13:02.825210       1 server.go:516] "Version info" version="v1.33.0"
	I0510 19:13:02.825246       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0510 19:13:02.830903       1 config.go:199] "Starting service config controller"
	I0510 19:13:02.831362       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0510 19:13:02.831483       1 config.go:105] "Starting endpoint slice config controller"
	I0510 19:13:02.831573       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0510 19:13:02.831604       1 config.go:440] "Starting serviceCIDR config controller"
	I0510 19:13:02.831609       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0510 19:13:02.836974       1 config.go:329] "Starting node config controller"
	I0510 19:13:02.837043       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0510 19:13:02.931963       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
	I0510 19:13:02.932004       1 shared_informer.go:357] "Caches are synced" controller="service config"
	I0510 19:13:02.934797       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	I0510 19:13:02.939205       1 shared_informer.go:357] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [3df19d449ad869e6c3b02da7edbd6ffb58d12f2d727f816ea15867bd4aa08d16] <==
	I0510 19:14:40.405840       1 serving.go:386] Generated self-signed cert in-memory
	W0510 19:14:41.944865       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0510 19:14:41.945157       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0510 19:14:41.945301       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0510 19:14:41.945327       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0510 19:14:42.013382       1 server.go:171] "Starting Kubernetes Scheduler" version="v1.33.0"
	I0510 19:14:42.013480       1 server.go:173] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0510 19:14:42.016453       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0510 19:14:42.016527       1 shared_informer.go:350] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0510 19:14:42.017173       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0510 19:14:42.017686       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0510 19:14:42.116998       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [b96ba0c6867681eb5c3dd0df167dc56dd09ffcb675f8fa26472566e54feb7385] <==
	E0510 19:12:52.728528       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0510 19:12:52.728590       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0510 19:12:52.728664       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0510 19:12:52.730336       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0510 19:12:52.730442       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0510 19:12:52.730507       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0510 19:12:52.730578       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0510 19:12:52.730615       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0510 19:12:52.730681       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0510 19:12:53.544505       1 reflector.go:200] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0510 19:12:53.550114       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0510 19:12:53.624424       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0510 19:12:53.688437       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0510 19:12:53.696486       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0510 19:12:53.741325       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0510 19:12:53.826290       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0510 19:12:53.904254       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0510 19:12:53.968405       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0510 19:12:53.979343       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0510 19:12:53.989836       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0510 19:12:54.009435       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0510 19:12:54.034193       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0510 19:12:54.045091       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I0510 19:12:56.911917       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0510 19:14:25.461949       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	May 10 19:14:40 pause-317241 kubelet[3473]: E0510 19:14:40.233692    3473 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-317241\" not found" node="pause-317241"
	May 10 19:14:41 pause-317241 kubelet[3473]: E0510 19:14:41.237427    3473 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-317241\" not found" node="pause-317241"
	May 10 19:14:41 pause-317241 kubelet[3473]: E0510 19:14:41.238392    3473 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-317241\" not found" node="pause-317241"
	May 10 19:14:41 pause-317241 kubelet[3473]: E0510 19:14:41.239007    3473 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-317241\" not found" node="pause-317241"
	May 10 19:14:41 pause-317241 kubelet[3473]: I0510 19:14:41.995801    3473 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-317241"
	May 10 19:14:42 pause-317241 kubelet[3473]: I0510 19:14:42.089038    3473 kubelet_node_status.go:124] "Node was previously registered" node="pause-317241"
	May 10 19:14:42 pause-317241 kubelet[3473]: I0510 19:14:42.089243    3473 kubelet_node_status.go:78] "Successfully registered node" node="pause-317241"
	May 10 19:14:42 pause-317241 kubelet[3473]: I0510 19:14:42.089304    3473 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 10 19:14:42 pause-317241 kubelet[3473]: I0510 19:14:42.090649    3473 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	May 10 19:14:42 pause-317241 kubelet[3473]: E0510 19:14:42.126649    3473 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-317241\" already exists" pod="kube-system/kube-apiserver-pause-317241"
	May 10 19:14:42 pause-317241 kubelet[3473]: I0510 19:14:42.126786    3473 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-317241"
	May 10 19:14:42 pause-317241 kubelet[3473]: E0510 19:14:42.151597    3473 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-317241\" already exists" pod="kube-system/kube-controller-manager-pause-317241"
	May 10 19:14:42 pause-317241 kubelet[3473]: I0510 19:14:42.151861    3473 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-317241"
	May 10 19:14:42 pause-317241 kubelet[3473]: E0510 19:14:42.166899    3473 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-317241\" already exists" pod="kube-system/kube-scheduler-pause-317241"
	May 10 19:14:42 pause-317241 kubelet[3473]: I0510 19:14:42.167085    3473 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-317241"
	May 10 19:14:42 pause-317241 kubelet[3473]: E0510 19:14:42.183352    3473 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"etcd-pause-317241\" already exists" pod="kube-system/etcd-pause-317241"
	May 10 19:14:42 pause-317241 kubelet[3473]: I0510 19:14:42.848797    3473 apiserver.go:52] "Watching apiserver"
	May 10 19:14:42 pause-317241 kubelet[3473]: I0510 19:14:42.895374    3473 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
	May 10 19:14:42 pause-317241 kubelet[3473]: I0510 19:14:42.996046    3473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/08543e5b-1085-4de5-9922-16d2a027fb0e-xtables-lock\") pod \"kube-proxy-skvbp\" (UID: \"08543e5b-1085-4de5-9922-16d2a027fb0e\") " pod="kube-system/kube-proxy-skvbp"
	May 10 19:14:42 pause-317241 kubelet[3473]: I0510 19:14:42.996502    3473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/08543e5b-1085-4de5-9922-16d2a027fb0e-lib-modules\") pod \"kube-proxy-skvbp\" (UID: \"08543e5b-1085-4de5-9922-16d2a027fb0e\") " pod="kube-system/kube-proxy-skvbp"
	May 10 19:14:46 pause-317241 kubelet[3473]: I0510 19:14:46.604017    3473 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	May 10 19:14:48 pause-317241 kubelet[3473]: E0510 19:14:48.062061    3473 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746904488061659268,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125819,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 19:14:48 pause-317241 kubelet[3473]: E0510 19:14:48.062259    3473 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746904488061659268,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125819,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 19:14:58 pause-317241 kubelet[3473]: E0510 19:14:58.064915    3473 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746904498064385504,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125819,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 19:14:58 pause-317241 kubelet[3473]: E0510 19:14:58.064959    3473 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746904498064385504,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125819,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-317241 -n pause-317241
helpers_test.go:261: (dbg) Run:  kubectl --context pause-317241 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (86.42s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (284.77s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-089147 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-089147 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m44.444162262s)

                                                
                                                
-- stdout --
	* [old-k8s-version-089147] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20720
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20720-388787/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-388787/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-089147" primary control-plane node in "old-k8s-version-089147" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0510 19:20:48.878285  449945 out.go:345] Setting OutFile to fd 1 ...
	I0510 19:20:48.878627  449945 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 19:20:48.878640  449945 out.go:358] Setting ErrFile to fd 2...
	I0510 19:20:48.878645  449945 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 19:20:48.878877  449945 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-388787/.minikube/bin
	I0510 19:20:48.879565  449945 out.go:352] Setting JSON to false
	I0510 19:20:48.880879  449945 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":32597,"bootTime":1746872252,"procs":305,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0510 19:20:48.881001  449945 start.go:140] virtualization: kvm guest
	I0510 19:20:48.883631  449945 out.go:177] * [old-k8s-version-089147] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0510 19:20:48.885246  449945 notify.go:220] Checking for updates...
	I0510 19:20:48.885259  449945 out.go:177]   - MINIKUBE_LOCATION=20720
	I0510 19:20:48.887423  449945 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0510 19:20:48.888809  449945 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20720-388787/kubeconfig
	I0510 19:20:48.890290  449945 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-388787/.minikube
	I0510 19:20:48.891948  449945 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0510 19:20:48.893788  449945 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0510 19:20:48.896196  449945 config.go:182] Loaded profile config "bridge-380533": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 19:20:48.896362  449945 config.go:182] Loaded profile config "enable-default-cni-380533": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 19:20:48.896538  449945 config.go:182] Loaded profile config "flannel-380533": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 19:20:48.896731  449945 driver.go:404] Setting default libvirt URI to qemu:///system
	I0510 19:20:48.940038  449945 out.go:177] * Using the kvm2 driver based on user configuration
	I0510 19:20:48.941565  449945 start.go:304] selected driver: kvm2
	I0510 19:20:48.941591  449945 start.go:908] validating driver "kvm2" against <nil>
	I0510 19:20:48.941609  449945 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0510 19:20:48.942761  449945 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0510 19:20:48.942882  449945 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20720-388787/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0510 19:20:48.961368  449945 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0510 19:20:48.961438  449945 start_flags.go:311] no existing cluster config was found, will generate one from the flags 
	I0510 19:20:48.961701  449945 start_flags.go:975] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0510 19:20:48.961744  449945 cni.go:84] Creating CNI manager for ""
	I0510 19:20:48.961808  449945 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0510 19:20:48.961822  449945 start_flags.go:320] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0510 19:20:48.961887  449945 start.go:347] cluster config:
	{Name:old-k8s-version-089147 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-089147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 19:20:48.962007  449945 iso.go:125] acquiring lock: {Name:mk19640015999219180c6685480547adf0c02201 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0510 19:20:48.964188  449945 out.go:177] * Starting "old-k8s-version-089147" primary control-plane node in "old-k8s-version-089147" cluster
	I0510 19:20:48.965646  449945 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0510 19:20:48.965715  449945 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0510 19:20:48.965732  449945 cache.go:56] Caching tarball of preloaded images
	I0510 19:20:48.965858  449945 preload.go:172] Found /home/jenkins/minikube-integration/20720-388787/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0510 19:20:48.965874  449945 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0510 19:20:48.966020  449945 profile.go:143] Saving config to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/config.json ...
	I0510 19:20:48.966054  449945 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/config.json: {Name:mk2bd57292fc9d988920da17952f9982e28bbca1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 19:20:48.966270  449945 start.go:360] acquireMachinesLock for old-k8s-version-089147: {Name:mk11499d7756d503a7a24339ad1a7f9ab9dc0fab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0510 19:20:51.490051  449945 start.go:364] duration metric: took 2.523725902s to acquireMachinesLock for "old-k8s-version-089147"
	I0510 19:20:51.490134  449945 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-089147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-089147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0510 19:20:51.490351  449945 start.go:125] createHost starting for "" (driver="kvm2")
	I0510 19:20:51.492411  449945 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0510 19:20:51.492751  449945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:20:51.492818  449945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:20:51.514873  449945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34507
	I0510 19:20:51.515494  449945 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:20:51.516136  449945 main.go:141] libmachine: Using API Version  1
	I0510 19:20:51.516164  449945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:20:51.516571  449945 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:20:51.516831  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetMachineName
	I0510 19:20:51.517055  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .DriverName
	I0510 19:20:51.517390  449945 start.go:159] libmachine.API.Create for "old-k8s-version-089147" (driver="kvm2")
	I0510 19:20:51.517432  449945 client.go:168] LocalClient.Create starting
	I0510 19:20:51.517488  449945 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem
	I0510 19:20:51.517541  449945 main.go:141] libmachine: Decoding PEM data...
	I0510 19:20:51.517566  449945 main.go:141] libmachine: Parsing certificate...
	I0510 19:20:51.517642  449945 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem
	I0510 19:20:51.517669  449945 main.go:141] libmachine: Decoding PEM data...
	I0510 19:20:51.517692  449945 main.go:141] libmachine: Parsing certificate...
	I0510 19:20:51.517715  449945 main.go:141] libmachine: Running pre-create checks...
	I0510 19:20:51.517727  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .PreCreateCheck
	I0510 19:20:51.518317  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetConfigRaw
	I0510 19:20:51.518967  449945 main.go:141] libmachine: Creating machine...
	I0510 19:20:51.518999  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .Create
	I0510 19:20:51.519194  449945 main.go:141] libmachine: (old-k8s-version-089147) creating KVM machine...
	I0510 19:20:51.519216  449945 main.go:141] libmachine: (old-k8s-version-089147) creating network...
	I0510 19:20:51.520679  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | found existing default KVM network
	I0510 19:20:51.521687  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | I0510 19:20:51.521511  449969 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:9a:36:23} reservation:<nil>}
	I0510 19:20:51.522851  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | I0510 19:20:51.522766  449969 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000205230}
	I0510 19:20:51.522981  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | created network xml: 
	I0510 19:20:51.523004  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | <network>
	I0510 19:20:51.523016  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG |   <name>mk-old-k8s-version-089147</name>
	I0510 19:20:51.523031  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG |   <dns enable='no'/>
	I0510 19:20:51.523040  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG |   
	I0510 19:20:51.523049  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0510 19:20:51.523066  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG |     <dhcp>
	I0510 19:20:51.523076  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0510 19:20:51.523083  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG |     </dhcp>
	I0510 19:20:51.523088  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG |   </ip>
	I0510 19:20:51.523095  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG |   
	I0510 19:20:51.523101  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | </network>
	I0510 19:20:51.523117  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | 
	I0510 19:20:51.529050  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | trying to create private KVM network mk-old-k8s-version-089147 192.168.50.0/24...
	I0510 19:20:51.632784  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | private KVM network mk-old-k8s-version-089147 192.168.50.0/24 created
	I0510 19:20:51.632819  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | I0510 19:20:51.632765  449969 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20720-388787/.minikube
	I0510 19:20:51.632832  449945 main.go:141] libmachine: (old-k8s-version-089147) setting up store path in /home/jenkins/minikube-integration/20720-388787/.minikube/machines/old-k8s-version-089147 ...
	I0510 19:20:51.632864  449945 main.go:141] libmachine: (old-k8s-version-089147) building disk image from file:///home/jenkins/minikube-integration/20720-388787/.minikube/cache/iso/amd64/minikube-v1.35.0-1746739450-20720-amd64.iso
	I0510 19:20:51.632969  449945 main.go:141] libmachine: (old-k8s-version-089147) Downloading /home/jenkins/minikube-integration/20720-388787/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20720-388787/.minikube/cache/iso/amd64/minikube-v1.35.0-1746739450-20720-amd64.iso...
	I0510 19:20:51.972183  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | I0510 19:20:51.972004  449969 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/old-k8s-version-089147/id_rsa...
	I0510 19:20:52.158307  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | I0510 19:20:52.158154  449969 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/old-k8s-version-089147/old-k8s-version-089147.rawdisk...
	I0510 19:20:52.158341  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | Writing magic tar header
	I0510 19:20:52.158375  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | Writing SSH key tar header
	I0510 19:20:52.158392  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | I0510 19:20:52.158311  449969 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20720-388787/.minikube/machines/old-k8s-version-089147 ...
	I0510 19:20:52.158500  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/old-k8s-version-089147
	I0510 19:20:52.158536  449945 main.go:141] libmachine: (old-k8s-version-089147) setting executable bit set on /home/jenkins/minikube-integration/20720-388787/.minikube/machines/old-k8s-version-089147 (perms=drwx------)
	I0510 19:20:52.158554  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20720-388787/.minikube/machines
	I0510 19:20:52.158574  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20720-388787/.minikube
	I0510 19:20:52.158587  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20720-388787
	I0510 19:20:52.158599  449945 main.go:141] libmachine: (old-k8s-version-089147) setting executable bit set on /home/jenkins/minikube-integration/20720-388787/.minikube/machines (perms=drwxr-xr-x)
	I0510 19:20:52.158649  449945 main.go:141] libmachine: (old-k8s-version-089147) setting executable bit set on /home/jenkins/minikube-integration/20720-388787/.minikube (perms=drwxr-xr-x)
	I0510 19:20:52.158672  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0510 19:20:52.158682  449945 main.go:141] libmachine: (old-k8s-version-089147) setting executable bit set on /home/jenkins/minikube-integration/20720-388787 (perms=drwxrwxr-x)
	I0510 19:20:52.158693  449945 main.go:141] libmachine: (old-k8s-version-089147) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0510 19:20:52.158701  449945 main.go:141] libmachine: (old-k8s-version-089147) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0510 19:20:52.158709  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | checking permissions on dir: /home/jenkins
	I0510 19:20:52.158715  449945 main.go:141] libmachine: (old-k8s-version-089147) creating domain...
	I0510 19:20:52.158727  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | checking permissions on dir: /home
	I0510 19:20:52.158733  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | skipping /home - not owner
	I0510 19:20:52.160035  449945 main.go:141] libmachine: (old-k8s-version-089147) define libvirt domain using xml: 
	I0510 19:20:52.160065  449945 main.go:141] libmachine: (old-k8s-version-089147) <domain type='kvm'>
	I0510 19:20:52.160076  449945 main.go:141] libmachine: (old-k8s-version-089147)   <name>old-k8s-version-089147</name>
	I0510 19:20:52.160089  449945 main.go:141] libmachine: (old-k8s-version-089147)   <memory unit='MiB'>2200</memory>
	I0510 19:20:52.160098  449945 main.go:141] libmachine: (old-k8s-version-089147)   <vcpu>2</vcpu>
	I0510 19:20:52.160105  449945 main.go:141] libmachine: (old-k8s-version-089147)   <features>
	I0510 19:20:52.160113  449945 main.go:141] libmachine: (old-k8s-version-089147)     <acpi/>
	I0510 19:20:52.160136  449945 main.go:141] libmachine: (old-k8s-version-089147)     <apic/>
	I0510 19:20:52.160149  449945 main.go:141] libmachine: (old-k8s-version-089147)     <pae/>
	I0510 19:20:52.160154  449945 main.go:141] libmachine: (old-k8s-version-089147)     
	I0510 19:20:52.160164  449945 main.go:141] libmachine: (old-k8s-version-089147)   </features>
	I0510 19:20:52.160176  449945 main.go:141] libmachine: (old-k8s-version-089147)   <cpu mode='host-passthrough'>
	I0510 19:20:52.160185  449945 main.go:141] libmachine: (old-k8s-version-089147)   
	I0510 19:20:52.160195  449945 main.go:141] libmachine: (old-k8s-version-089147)   </cpu>
	I0510 19:20:52.160203  449945 main.go:141] libmachine: (old-k8s-version-089147)   <os>
	I0510 19:20:52.160213  449945 main.go:141] libmachine: (old-k8s-version-089147)     <type>hvm</type>
	I0510 19:20:52.160222  449945 main.go:141] libmachine: (old-k8s-version-089147)     <boot dev='cdrom'/>
	I0510 19:20:52.160231  449945 main.go:141] libmachine: (old-k8s-version-089147)     <boot dev='hd'/>
	I0510 19:20:52.160240  449945 main.go:141] libmachine: (old-k8s-version-089147)     <bootmenu enable='no'/>
	I0510 19:20:52.160248  449945 main.go:141] libmachine: (old-k8s-version-089147)   </os>
	I0510 19:20:52.160258  449945 main.go:141] libmachine: (old-k8s-version-089147)   <devices>
	I0510 19:20:52.160270  449945 main.go:141] libmachine: (old-k8s-version-089147)     <disk type='file' device='cdrom'>
	I0510 19:20:52.160284  449945 main.go:141] libmachine: (old-k8s-version-089147)       <source file='/home/jenkins/minikube-integration/20720-388787/.minikube/machines/old-k8s-version-089147/boot2docker.iso'/>
	I0510 19:20:52.160297  449945 main.go:141] libmachine: (old-k8s-version-089147)       <target dev='hdc' bus='scsi'/>
	I0510 19:20:52.160310  449945 main.go:141] libmachine: (old-k8s-version-089147)       <readonly/>
	I0510 19:20:52.160320  449945 main.go:141] libmachine: (old-k8s-version-089147)     </disk>
	I0510 19:20:52.160339  449945 main.go:141] libmachine: (old-k8s-version-089147)     <disk type='file' device='disk'>
	I0510 19:20:52.160352  449945 main.go:141] libmachine: (old-k8s-version-089147)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0510 19:20:52.160371  449945 main.go:141] libmachine: (old-k8s-version-089147)       <source file='/home/jenkins/minikube-integration/20720-388787/.minikube/machines/old-k8s-version-089147/old-k8s-version-089147.rawdisk'/>
	I0510 19:20:52.160380  449945 main.go:141] libmachine: (old-k8s-version-089147)       <target dev='hda' bus='virtio'/>
	I0510 19:20:52.160391  449945 main.go:141] libmachine: (old-k8s-version-089147)     </disk>
	I0510 19:20:52.160402  449945 main.go:141] libmachine: (old-k8s-version-089147)     <interface type='network'>
	I0510 19:20:52.160413  449945 main.go:141] libmachine: (old-k8s-version-089147)       <source network='mk-old-k8s-version-089147'/>
	I0510 19:20:52.160443  449945 main.go:141] libmachine: (old-k8s-version-089147)       <model type='virtio'/>
	I0510 19:20:52.160454  449945 main.go:141] libmachine: (old-k8s-version-089147)     </interface>
	I0510 19:20:52.160462  449945 main.go:141] libmachine: (old-k8s-version-089147)     <interface type='network'>
	I0510 19:20:52.160473  449945 main.go:141] libmachine: (old-k8s-version-089147)       <source network='default'/>
	I0510 19:20:52.160485  449945 main.go:141] libmachine: (old-k8s-version-089147)       <model type='virtio'/>
	I0510 19:20:52.160497  449945 main.go:141] libmachine: (old-k8s-version-089147)     </interface>
	I0510 19:20:52.160507  449945 main.go:141] libmachine: (old-k8s-version-089147)     <serial type='pty'>
	I0510 19:20:52.160517  449945 main.go:141] libmachine: (old-k8s-version-089147)       <target port='0'/>
	I0510 19:20:52.160527  449945 main.go:141] libmachine: (old-k8s-version-089147)     </serial>
	I0510 19:20:52.160539  449945 main.go:141] libmachine: (old-k8s-version-089147)     <console type='pty'>
	I0510 19:20:52.160550  449945 main.go:141] libmachine: (old-k8s-version-089147)       <target type='serial' port='0'/>
	I0510 19:20:52.160562  449945 main.go:141] libmachine: (old-k8s-version-089147)     </console>
	I0510 19:20:52.160572  449945 main.go:141] libmachine: (old-k8s-version-089147)     <rng model='virtio'>
	I0510 19:20:52.160583  449945 main.go:141] libmachine: (old-k8s-version-089147)       <backend model='random'>/dev/random</backend>
	I0510 19:20:52.160593  449945 main.go:141] libmachine: (old-k8s-version-089147)     </rng>
	I0510 19:20:52.160602  449945 main.go:141] libmachine: (old-k8s-version-089147)     
	I0510 19:20:52.160629  449945 main.go:141] libmachine: (old-k8s-version-089147)     
	I0510 19:20:52.160642  449945 main.go:141] libmachine: (old-k8s-version-089147)   </devices>
	I0510 19:20:52.160650  449945 main.go:141] libmachine: (old-k8s-version-089147) </domain>
	I0510 19:20:52.160664  449945 main.go:141] libmachine: (old-k8s-version-089147) 
	I0510 19:20:52.165294  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:ed:4c:2f in network default
	I0510 19:20:52.166019  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:20:52.166041  449945 main.go:141] libmachine: (old-k8s-version-089147) starting domain...
	I0510 19:20:52.166059  449945 main.go:141] libmachine: (old-k8s-version-089147) ensuring networks are active...
	I0510 19:20:52.166972  449945 main.go:141] libmachine: (old-k8s-version-089147) Ensuring network default is active
	I0510 19:20:52.167502  449945 main.go:141] libmachine: (old-k8s-version-089147) Ensuring network mk-old-k8s-version-089147 is active
	I0510 19:20:52.168294  449945 main.go:141] libmachine: (old-k8s-version-089147) getting domain XML...
	I0510 19:20:52.169129  449945 main.go:141] libmachine: (old-k8s-version-089147) creating domain...
	I0510 19:20:53.653091  449945 main.go:141] libmachine: (old-k8s-version-089147) waiting for IP...
	I0510 19:20:53.653824  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:20:53.654389  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | unable to find current IP address of domain old-k8s-version-089147 in network mk-old-k8s-version-089147
	I0510 19:20:53.654517  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | I0510 19:20:53.654412  449969 retry.go:31] will retry after 231.968436ms: waiting for domain to come up
	I0510 19:20:53.888316  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:20:53.889107  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | unable to find current IP address of domain old-k8s-version-089147 in network mk-old-k8s-version-089147
	I0510 19:20:53.889153  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | I0510 19:20:53.889071  449969 retry.go:31] will retry after 253.257147ms: waiting for domain to come up
	I0510 19:20:54.143740  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:20:54.144295  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | unable to find current IP address of domain old-k8s-version-089147 in network mk-old-k8s-version-089147
	I0510 19:20:54.144326  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | I0510 19:20:54.144271  449969 retry.go:31] will retry after 402.096936ms: waiting for domain to come up
	I0510 19:20:54.547963  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:20:54.548556  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | unable to find current IP address of domain old-k8s-version-089147 in network mk-old-k8s-version-089147
	I0510 19:20:54.548583  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | I0510 19:20:54.548535  449969 retry.go:31] will retry after 492.954609ms: waiting for domain to come up
	I0510 19:20:55.043631  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:20:55.044664  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | unable to find current IP address of domain old-k8s-version-089147 in network mk-old-k8s-version-089147
	I0510 19:20:55.044696  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | I0510 19:20:55.044612  449969 retry.go:31] will retry after 525.824276ms: waiting for domain to come up
	I0510 19:20:55.572483  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:20:55.573300  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | unable to find current IP address of domain old-k8s-version-089147 in network mk-old-k8s-version-089147
	I0510 19:20:55.573373  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | I0510 19:20:55.573252  449969 retry.go:31] will retry after 612.35236ms: waiting for domain to come up
	I0510 19:20:56.188067  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:20:56.188783  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | unable to find current IP address of domain old-k8s-version-089147 in network mk-old-k8s-version-089147
	I0510 19:20:56.188819  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | I0510 19:20:56.188673  449969 retry.go:31] will retry after 1.146096376s: waiting for domain to come up
	I0510 19:20:57.336690  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:20:57.337295  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | unable to find current IP address of domain old-k8s-version-089147 in network mk-old-k8s-version-089147
	I0510 19:20:57.337325  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | I0510 19:20:57.337274  449969 retry.go:31] will retry after 894.20735ms: waiting for domain to come up
	I0510 19:20:58.233383  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:20:58.234052  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | unable to find current IP address of domain old-k8s-version-089147 in network mk-old-k8s-version-089147
	I0510 19:20:58.234080  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | I0510 19:20:58.234013  449969 retry.go:31] will retry after 1.733145712s: waiting for domain to come up
	I0510 19:20:59.968387  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:20:59.968991  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | unable to find current IP address of domain old-k8s-version-089147 in network mk-old-k8s-version-089147
	I0510 19:20:59.969039  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | I0510 19:20:59.968973  449969 retry.go:31] will retry after 1.493771314s: waiting for domain to come up
	I0510 19:21:01.464484  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:21:01.464968  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | unable to find current IP address of domain old-k8s-version-089147 in network mk-old-k8s-version-089147
	I0510 19:21:01.465010  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | I0510 19:21:01.464954  449969 retry.go:31] will retry after 2.679980544s: waiting for domain to come up
	I0510 19:21:04.146797  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:21:04.147608  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | unable to find current IP address of domain old-k8s-version-089147 in network mk-old-k8s-version-089147
	I0510 19:21:04.147633  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | I0510 19:21:04.147500  449969 retry.go:31] will retry after 2.50424137s: waiting for domain to come up
	I0510 19:21:06.652996  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:21:06.653668  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | unable to find current IP address of domain old-k8s-version-089147 in network mk-old-k8s-version-089147
	I0510 19:21:06.653745  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | I0510 19:21:06.653638  449969 retry.go:31] will retry after 4.281861018s: waiting for domain to come up
	I0510 19:21:10.940117  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:21:10.940735  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | unable to find current IP address of domain old-k8s-version-089147 in network mk-old-k8s-version-089147
	I0510 19:21:10.940757  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | I0510 19:21:10.940688  449969 retry.go:31] will retry after 3.85851524s: waiting for domain to come up
	I0510 19:21:14.800702  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:21:14.801289  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | unable to find current IP address of domain old-k8s-version-089147 in network mk-old-k8s-version-089147
	I0510 19:21:14.801339  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | I0510 19:21:14.801286  449969 retry.go:31] will retry after 5.486250873s: waiting for domain to come up
	I0510 19:21:20.290661  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:21:20.291375  449945 main.go:141] libmachine: (old-k8s-version-089147) found domain IP: 192.168.50.225
	I0510 19:21:20.291393  449945 main.go:141] libmachine: (old-k8s-version-089147) reserving static IP address...
	I0510 19:21:20.291414  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has current primary IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:21:20.291774  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-089147", mac: "52:54:00:c5:c6:86", ip: "192.168.50.225"} in network mk-old-k8s-version-089147
	I0510 19:21:20.401205  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | Getting to WaitForSSH function...
	I0510 19:21:20.401240  449945 main.go:141] libmachine: (old-k8s-version-089147) reserved static IP address 192.168.50.225 for domain old-k8s-version-089147
	I0510 19:21:20.401248  449945 main.go:141] libmachine: (old-k8s-version-089147) waiting for SSH...
	I0510 19:21:20.404521  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:21:20.405262  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:21:09 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c5:c6:86}
	I0510 19:21:20.405288  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:21:20.405980  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | Using SSH client type: external
	I0510 19:21:20.406002  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | Using SSH private key: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/old-k8s-version-089147/id_rsa (-rw-------)
	I0510 19:21:20.406033  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.225 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20720-388787/.minikube/machines/old-k8s-version-089147/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0510 19:21:20.406041  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | About to run SSH command:
	I0510 19:21:20.406055  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | exit 0
	I0510 19:21:20.560566  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | SSH cmd err, output: <nil>: 
	I0510 19:21:20.560822  449945 main.go:141] libmachine: (old-k8s-version-089147) KVM machine creation complete
	I0510 19:21:20.561170  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetConfigRaw
	I0510 19:21:20.562394  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .DriverName
	I0510 19:21:20.562595  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .DriverName
	I0510 19:21:20.563034  449945 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0510 19:21:20.563049  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetState
	I0510 19:21:20.564648  449945 main.go:141] libmachine: Detecting operating system of created instance...
	I0510 19:21:20.564679  449945 main.go:141] libmachine: Waiting for SSH to be available...
	I0510 19:21:20.564688  449945 main.go:141] libmachine: Getting to WaitForSSH function...
	I0510 19:21:20.564716  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHHostname
	I0510 19:21:20.568429  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:21:20.568955  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:21:09 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:21:20.568992  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:21:20.569238  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHPort
	I0510 19:21:20.569459  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:21:20.569666  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:21:20.569827  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHUsername
	I0510 19:21:20.569993  449945 main.go:141] libmachine: Using SSH client type: native
	I0510 19:21:20.570336  449945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.50.225 22 <nil> <nil>}
	I0510 19:21:20.570351  449945 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0510 19:21:20.715385  449945 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0510 19:21:20.715409  449945 main.go:141] libmachine: Detecting the provisioner...
	I0510 19:21:20.715418  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHHostname
	I0510 19:21:20.719032  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:21:20.719400  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:21:09 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:21:20.719431  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:21:20.719703  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHPort
	I0510 19:21:20.719902  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:21:20.720043  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:21:20.720180  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHUsername
	I0510 19:21:20.720292  449945 main.go:141] libmachine: Using SSH client type: native
	I0510 19:21:20.720492  449945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.50.225 22 <nil> <nil>}
	I0510 19:21:20.720498  449945 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0510 19:21:20.854194  449945 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2024.11.2-dirty
	ID=buildroot
	VERSION_ID=2024.11.2
	PRETTY_NAME="Buildroot 2024.11.2"
	
	I0510 19:21:20.854290  449945 main.go:141] libmachine: found compatible host: buildroot
	I0510 19:21:20.854303  449945 main.go:141] libmachine: Provisioning with buildroot...
	I0510 19:21:20.854313  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetMachineName
	I0510 19:21:20.854610  449945 buildroot.go:166] provisioning hostname "old-k8s-version-089147"
	I0510 19:21:20.854642  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetMachineName
	I0510 19:21:20.854958  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHHostname
	I0510 19:21:20.858793  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:21:20.859292  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:21:09 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:21:20.859317  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:21:20.859590  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHPort
	I0510 19:21:20.859821  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:21:20.860017  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:21:20.860173  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHUsername
	I0510 19:21:20.860345  449945 main.go:141] libmachine: Using SSH client type: native
	I0510 19:21:20.860741  449945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.50.225 22 <nil> <nil>}
	I0510 19:21:20.860793  449945 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-089147 && echo "old-k8s-version-089147" | sudo tee /etc/hostname
	I0510 19:21:21.021255  449945 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-089147
	
	I0510 19:21:21.021317  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHHostname
	I0510 19:21:21.025238  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:21:21.025828  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:21:09 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:21:21.025867  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:21:21.026180  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHPort
	I0510 19:21:21.026415  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:21:21.026609  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:21:21.026803  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHUsername
	I0510 19:21:21.027065  449945 main.go:141] libmachine: Using SSH client type: native
	I0510 19:21:21.027423  449945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.50.225 22 <nil> <nil>}
	I0510 19:21:21.027454  449945 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-089147' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-089147/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-089147' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0510 19:21:21.192009  449945 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0510 19:21:21.192053  449945 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20720-388787/.minikube CaCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20720-388787/.minikube}
	I0510 19:21:21.192119  449945 buildroot.go:174] setting up certificates
	I0510 19:21:21.192148  449945 provision.go:84] configureAuth start
	I0510 19:21:21.192171  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetMachineName
	I0510 19:21:21.193129  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetIP
	I0510 19:21:21.196589  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:21:21.197032  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:21:09 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:21:21.197061  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:21:21.197333  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHHostname
	I0510 19:21:21.209484  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:21:21.210072  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:21:09 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:21:21.210099  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:21:21.210309  449945 provision.go:143] copyHostCerts
	I0510 19:21:21.210389  449945 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem, removing ...
	I0510 19:21:21.210414  449945 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem
	I0510 19:21:21.210484  449945 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem (1078 bytes)
	I0510 19:21:21.210621  449945 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem, removing ...
	I0510 19:21:21.210633  449945 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem
	I0510 19:21:21.210666  449945 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem (1123 bytes)
	I0510 19:21:21.210754  449945 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem, removing ...
	I0510 19:21:21.210766  449945 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem
	I0510 19:21:21.210803  449945 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem (1675 bytes)
	I0510 19:21:21.210872  449945 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-089147 san=[127.0.0.1 192.168.50.225 localhost minikube old-k8s-version-089147]
	I0510 19:21:21.671962  449945 provision.go:177] copyRemoteCerts
	I0510 19:21:21.672047  449945 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0510 19:21:21.672081  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHHostname
	I0510 19:21:21.682374  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:21:21.682855  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:21:09 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:21:21.682898  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:21:21.683530  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHPort
	I0510 19:21:21.683848  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:21:21.684055  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHUsername
	I0510 19:21:21.684243  449945 sshutil.go:53] new ssh client: &{IP:192.168.50.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/old-k8s-version-089147/id_rsa Username:docker}
	I0510 19:21:21.791515  449945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0510 19:21:21.837059  449945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0510 19:21:21.874330  449945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0510 19:21:21.917803  449945 provision.go:87] duration metric: took 725.633631ms to configureAuth
	I0510 19:21:21.917840  449945 buildroot.go:189] setting minikube options for container-runtime
	I0510 19:21:21.918094  449945 config.go:182] Loaded profile config "old-k8s-version-089147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0510 19:21:21.918217  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHHostname
	I0510 19:21:21.921692  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:21:21.922245  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:21:09 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:21:21.922288  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:21:21.922537  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHPort
	I0510 19:21:21.922766  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:21:21.922961  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:21:21.923152  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHUsername
	I0510 19:21:21.923421  449945 main.go:141] libmachine: Using SSH client type: native
	I0510 19:21:21.923733  449945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.50.225 22 <nil> <nil>}
	I0510 19:21:21.923760  449945 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0510 19:21:22.227497  449945 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0510 19:21:22.227525  449945 main.go:141] libmachine: Checking connection to Docker...
	I0510 19:21:22.227543  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetURL
	I0510 19:21:22.229469  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | using libvirt version 6000000
	I0510 19:21:22.232016  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:21:22.234322  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:21:09 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:21:22.234342  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:21:22.234726  449945 main.go:141] libmachine: Docker is up and running!
	I0510 19:21:22.234739  449945 main.go:141] libmachine: Reticulating splines...
	I0510 19:21:22.234748  449945 client.go:171] duration metric: took 30.717306672s to LocalClient.Create
	I0510 19:21:22.234769  449945 start.go:167] duration metric: took 30.717380432s to libmachine.API.Create "old-k8s-version-089147"
	I0510 19:21:22.234781  449945 start.go:293] postStartSetup for "old-k8s-version-089147" (driver="kvm2")
	I0510 19:21:22.234799  449945 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0510 19:21:22.234822  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .DriverName
	I0510 19:21:22.235016  449945 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0510 19:21:22.235036  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHHostname
	I0510 19:21:22.238014  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:21:22.238386  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:21:09 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:21:22.238419  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:21:22.238561  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHPort
	I0510 19:21:22.238804  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:21:22.239005  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHUsername
	I0510 19:21:22.239176  449945 sshutil.go:53] new ssh client: &{IP:192.168.50.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/old-k8s-version-089147/id_rsa Username:docker}
	I0510 19:21:22.341394  449945 ssh_runner.go:195] Run: cat /etc/os-release
	I0510 19:21:22.349448  449945 info.go:137] Remote host: Buildroot 2024.11.2
	I0510 19:21:22.349486  449945 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-388787/.minikube/addons for local assets ...
	I0510 19:21:22.349575  449945 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-388787/.minikube/files for local assets ...
	I0510 19:21:22.349704  449945 filesync.go:149] local asset: /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem -> 3959802.pem in /etc/ssl/certs
	I0510 19:21:22.349845  449945 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0510 19:21:22.366993  449945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem --> /etc/ssl/certs/3959802.pem (1708 bytes)
	I0510 19:21:22.405786  449945 start.go:296] duration metric: took 170.979343ms for postStartSetup
	I0510 19:21:22.405858  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetConfigRaw
	I0510 19:21:22.406705  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetIP
	I0510 19:21:22.410530  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:21:22.411122  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:21:09 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:21:22.411144  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:21:22.411759  449945 profile.go:143] Saving config to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/config.json ...
	I0510 19:21:22.412087  449945 start.go:128] duration metric: took 30.92170916s to createHost
	I0510 19:21:22.412131  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHHostname
	I0510 19:21:22.415152  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:21:22.415763  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:21:09 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:21:22.415797  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:21:22.416129  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHPort
	I0510 19:21:22.416344  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:21:22.416550  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:21:22.416723  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHUsername
	I0510 19:21:22.417046  449945 main.go:141] libmachine: Using SSH client type: native
	I0510 19:21:22.417348  449945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.50.225 22 <nil> <nil>}
	I0510 19:21:22.417371  449945 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0510 19:21:22.537739  449945 main.go:141] libmachine: SSH cmd err, output: <nil>: 1746904882.525586093
	
	I0510 19:21:22.537777  449945 fix.go:216] guest clock: 1746904882.525586093
	I0510 19:21:22.537801  449945 fix.go:229] Guest: 2025-05-10 19:21:22.525586093 +0000 UTC Remote: 2025-05-10 19:21:22.412109791 +0000 UTC m=+33.580148420 (delta=113.476302ms)
	I0510 19:21:22.537841  449945 fix.go:200] guest clock delta is within tolerance: 113.476302ms
	I0510 19:21:22.537855  449945 start.go:83] releasing machines lock for "old-k8s-version-089147", held for 31.047765142s
	I0510 19:21:22.537892  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .DriverName
	I0510 19:21:22.538173  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetIP
	I0510 19:21:22.541869  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:21:22.542416  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:21:09 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:21:22.542446  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:21:22.542850  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .DriverName
	I0510 19:21:22.543509  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .DriverName
	I0510 19:21:22.543748  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .DriverName
	I0510 19:21:22.543882  449945 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0510 19:21:22.543931  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHHostname
	I0510 19:21:22.544002  449945 ssh_runner.go:195] Run: cat /version.json
	I0510 19:21:22.544034  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHHostname
	I0510 19:21:22.550348  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:21:22.550390  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:21:22.550421  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:21:09 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:21:22.550449  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:21:22.550620  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:21:09 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:21:22.550648  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:21:22.550674  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHPort
	I0510 19:21:22.550824  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHPort
	I0510 19:21:22.551036  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:21:22.551037  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:21:22.551315  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHUsername
	I0510 19:21:22.551321  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHUsername
	I0510 19:21:22.551698  449945 sshutil.go:53] new ssh client: &{IP:192.168.50.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/old-k8s-version-089147/id_rsa Username:docker}
	I0510 19:21:22.551720  449945 sshutil.go:53] new ssh client: &{IP:192.168.50.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/old-k8s-version-089147/id_rsa Username:docker}
	I0510 19:21:22.653934  449945 ssh_runner.go:195] Run: systemctl --version
	I0510 19:21:22.661143  449945 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0510 19:21:22.838744  449945 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0510 19:21:22.846641  449945 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0510 19:21:22.846715  449945 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0510 19:21:22.870438  449945 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0510 19:21:22.870478  449945 start.go:495] detecting cgroup driver to use...
	I0510 19:21:22.870566  449945 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0510 19:21:22.891892  449945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0510 19:21:22.912835  449945 docker.go:225] disabling cri-docker service (if available) ...
	I0510 19:21:22.912914  449945 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0510 19:21:22.934541  449945 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0510 19:21:22.955586  449945 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0510 19:21:23.151737  449945 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0510 19:21:23.335401  449945 docker.go:241] disabling docker service ...
	I0510 19:21:23.335466  449945 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0510 19:21:23.352797  449945 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0510 19:21:23.368070  449945 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0510 19:21:23.577409  449945 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0510 19:21:23.744843  449945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0510 19:21:23.763093  449945 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0510 19:21:23.791682  449945 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0510 19:21:23.791744  449945 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:21:23.805396  449945 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0510 19:21:23.805486  449945 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:21:23.819170  449945 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:21:23.831857  449945 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:21:23.845315  449945 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0510 19:21:23.860220  449945 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0510 19:21:23.874517  449945 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0510 19:21:23.874593  449945 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0510 19:21:23.895067  449945 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0510 19:21:23.909239  449945 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 19:21:24.096028  449945 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0510 19:21:24.240197  449945 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0510 19:21:24.240276  449945 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0510 19:21:24.246397  449945 start.go:563] Will wait 60s for crictl version
	I0510 19:21:24.246465  449945 ssh_runner.go:195] Run: which crictl
	I0510 19:21:24.251291  449945 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0510 19:21:24.302952  449945 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0510 19:21:24.303058  449945 ssh_runner.go:195] Run: crio --version
	I0510 19:21:24.344222  449945 ssh_runner.go:195] Run: crio --version
	I0510 19:21:24.382142  449945 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0510 19:21:24.383482  449945 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetIP
	I0510 19:21:24.386339  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:21:24.386651  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:21:09 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:21:24.386700  449945 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:21:24.386998  449945 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0510 19:21:24.391659  449945 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0510 19:21:24.410346  449945 kubeadm.go:875] updating cluster {Name:old-k8s-version-089147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-089147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.225 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0510 19:21:24.410605  449945 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0510 19:21:24.410697  449945 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 19:21:24.455967  449945 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0510 19:21:24.456054  449945 ssh_runner.go:195] Run: which lz4
	I0510 19:21:24.461984  449945 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0510 19:21:24.467758  449945 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0510 19:21:24.467806  449945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0510 19:21:26.355840  449945 crio.go:462] duration metric: took 1.893909907s to copy over tarball
	I0510 19:21:26.355928  449945 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0510 19:21:29.170739  449945 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.814779516s)
	I0510 19:21:29.170772  449945 crio.go:469] duration metric: took 2.814895923s to extract the tarball
	I0510 19:21:29.170781  449945 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0510 19:21:29.226017  449945 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 19:21:29.304587  449945 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0510 19:21:29.304626  449945 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0510 19:21:29.304742  449945 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0510 19:21:29.304778  449945 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0510 19:21:29.304825  449945 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0510 19:21:29.304822  449945 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0510 19:21:29.305065  449945 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0510 19:21:29.305101  449945 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0510 19:21:29.305103  449945 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0510 19:21:29.304748  449945 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0510 19:21:29.307115  449945 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0510 19:21:29.307149  449945 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0510 19:21:29.307152  449945 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0510 19:21:29.307115  449945 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0510 19:21:29.307119  449945 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0510 19:21:29.307193  449945 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0510 19:21:29.307195  449945 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0510 19:21:29.307117  449945 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0510 19:21:29.453533  449945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0510 19:21:29.457142  449945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0510 19:21:29.460352  449945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0510 19:21:29.474804  449945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0510 19:21:29.498395  449945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0510 19:21:29.499964  449945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0510 19:21:29.522887  449945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0510 19:21:29.594937  449945 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0510 19:21:29.594994  449945 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0510 19:21:29.595045  449945 ssh_runner.go:195] Run: which crictl
	I0510 19:21:29.601399  449945 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0510 19:21:29.601449  449945 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0510 19:21:29.601505  449945 ssh_runner.go:195] Run: which crictl
	I0510 19:21:29.657430  449945 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0510 19:21:29.657475  449945 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0510 19:21:29.657527  449945 ssh_runner.go:195] Run: which crictl
	I0510 19:21:29.657590  449945 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0510 19:21:29.657602  449945 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0510 19:21:29.657642  449945 ssh_runner.go:195] Run: which crictl
	I0510 19:21:29.721459  449945 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0510 19:21:29.721507  449945 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0510 19:21:29.721513  449945 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0510 19:21:29.721545  449945 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0510 19:21:29.721554  449945 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0510 19:21:29.721587  449945 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0510 19:21:29.721594  449945 ssh_runner.go:195] Run: which crictl
	I0510 19:21:29.721611  449945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0510 19:21:29.721621  449945 ssh_runner.go:195] Run: which crictl
	I0510 19:21:29.721565  449945 ssh_runner.go:195] Run: which crictl
	I0510 19:21:29.721660  449945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0510 19:21:29.721709  449945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0510 19:21:29.721717  449945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0510 19:21:29.747111  449945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0510 19:21:29.874737  449945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0510 19:21:29.874806  449945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0510 19:21:29.890193  449945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0510 19:21:29.890386  449945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0510 19:21:29.890476  449945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0510 19:21:29.890606  449945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0510 19:21:30.041497  449945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0510 19:21:30.054683  449945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0510 19:21:30.054840  449945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0510 19:21:30.054932  449945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0510 19:21:30.101091  449945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0510 19:21:30.101244  449945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0510 19:21:30.101246  449945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0510 19:21:30.194579  449945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0510 19:21:30.273791  449945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0510 19:21:30.273918  449945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0510 19:21:30.274200  449945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0510 19:21:30.299228  449945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0510 19:21:30.305783  449945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0510 19:21:30.305809  449945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0510 19:21:30.305813  449945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0510 19:21:30.392830  449945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0510 19:21:30.412826  449945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0510 19:21:30.527050  449945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0510 19:21:30.527114  449945 cache_images.go:92] duration metric: took 1.222467308s to LoadCachedImages
	W0510 19:21:30.527206  449945 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0510 19:21:30.527223  449945 kubeadm.go:926] updating node { 192.168.50.225 8443 v1.20.0 crio true true} ...
	I0510 19:21:30.527377  449945 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-089147 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.225
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-089147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0510 19:21:30.527499  449945 ssh_runner.go:195] Run: crio config
	I0510 19:21:30.587847  449945 cni.go:84] Creating CNI manager for ""
	I0510 19:21:30.587873  449945 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0510 19:21:30.587883  449945 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0510 19:21:30.587903  449945 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.225 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-089147 NodeName:old-k8s-version-089147 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.225"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.225 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0510 19:21:30.588062  449945 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.225
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-089147"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.225
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.225"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0510 19:21:30.588141  449945 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0510 19:21:30.603817  449945 binaries.go:44] Found k8s binaries, skipping transfer
	I0510 19:21:30.603882  449945 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0510 19:21:30.619865  449945 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0510 19:21:30.644415  449945 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0510 19:21:30.669208  449945 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0510 19:21:30.693368  449945 ssh_runner.go:195] Run: grep 192.168.50.225	control-plane.minikube.internal$ /etc/hosts
	I0510 19:21:30.698409  449945 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.225	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0510 19:21:30.717734  449945 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 19:21:30.860444  449945 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0510 19:21:30.906143  449945 certs.go:68] Setting up /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147 for IP: 192.168.50.225
	I0510 19:21:30.906169  449945 certs.go:194] generating shared ca certs ...
	I0510 19:21:30.906205  449945 certs.go:226] acquiring lock for ca certs: {Name:mk8db74782205da4ac57ef815dd495cda255251a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 19:21:30.906395  449945 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20720-388787/.minikube/ca.key
	I0510 19:21:30.906455  449945 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.key
	I0510 19:21:30.906471  449945 certs.go:256] generating profile certs ...
	I0510 19:21:30.906547  449945 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/client.key
	I0510 19:21:30.906572  449945 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/client.crt with IP's: []
	I0510 19:21:31.144171  449945 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/client.crt ...
	I0510 19:21:31.144215  449945 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/client.crt: {Name:mkd3e44a31effbc9de9caa7169df2c977a6fcf87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 19:21:31.144455  449945 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/client.key ...
	I0510 19:21:31.144484  449945 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/client.key: {Name:mk41b411364bd6e1d56b6f1220ad140a9dc65173 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 19:21:31.144650  449945 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/apiserver.key.3362ca92
	I0510 19:21:31.144678  449945 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/apiserver.crt.3362ca92 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.225]
	I0510 19:21:31.492681  449945 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/apiserver.crt.3362ca92 ...
	I0510 19:21:31.492735  449945 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/apiserver.crt.3362ca92: {Name:mk20d72bbb716bbda62045df3beaef639ca6f90e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 19:21:31.492988  449945 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/apiserver.key.3362ca92 ...
	I0510 19:21:31.493020  449945 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/apiserver.key.3362ca92: {Name:mkc13f178a94fa7a2b105ca5208ae459ae06ddf6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 19:21:31.493160  449945 certs.go:381] copying /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/apiserver.crt.3362ca92 -> /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/apiserver.crt
	I0510 19:21:31.493295  449945 certs.go:385] copying /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/apiserver.key.3362ca92 -> /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/apiserver.key
	I0510 19:21:31.493407  449945 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/proxy-client.key
	I0510 19:21:31.493432  449945 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/proxy-client.crt with IP's: []
	I0510 19:21:31.895522  449945 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/proxy-client.crt ...
	I0510 19:21:31.895554  449945 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/proxy-client.crt: {Name:mk951aaec3f68066e7d9e622df9f1a7e4bed97f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 19:21:31.895770  449945 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/proxy-client.key ...
	I0510 19:21:31.895795  449945 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/proxy-client.key: {Name:mk68e23e19629d4227adc67645c210885ec95725 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 19:21:31.896027  449945 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/395980.pem (1338 bytes)
	W0510 19:21:31.896081  449945 certs.go:480] ignoring /home/jenkins/minikube-integration/20720-388787/.minikube/certs/395980_empty.pem, impossibly tiny 0 bytes
	I0510 19:21:31.896096  449945 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem (1679 bytes)
	I0510 19:21:31.896126  449945 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem (1078 bytes)
	I0510 19:21:31.896157  449945 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem (1123 bytes)
	I0510 19:21:31.896195  449945 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem (1675 bytes)
	I0510 19:21:31.896249  449945 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem (1708 bytes)
	I0510 19:21:31.896908  449945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0510 19:21:31.960562  449945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0510 19:21:32.005812  449945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0510 19:21:32.041443  449945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0510 19:21:32.078545  449945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0510 19:21:32.112579  449945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0510 19:21:32.145337  449945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0510 19:21:32.177795  449945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0510 19:21:32.214358  449945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem --> /usr/share/ca-certificates/3959802.pem (1708 bytes)
	I0510 19:21:32.253230  449945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0510 19:21:32.289334  449945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/certs/395980.pem --> /usr/share/ca-certificates/395980.pem (1338 bytes)
	I0510 19:21:32.326391  449945 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0510 19:21:32.350523  449945 ssh_runner.go:195] Run: openssl version
	I0510 19:21:32.357910  449945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0510 19:21:32.372460  449945 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0510 19:21:32.378253  449945 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 10 17:52 /usr/share/ca-certificates/minikubeCA.pem
	I0510 19:21:32.378322  449945 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0510 19:21:32.386076  449945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0510 19:21:32.399982  449945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/395980.pem && ln -fs /usr/share/ca-certificates/395980.pem /etc/ssl/certs/395980.pem"
	I0510 19:21:32.415384  449945 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/395980.pem
	I0510 19:21:32.421094  449945 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 10 18:00 /usr/share/ca-certificates/395980.pem
	I0510 19:21:32.421162  449945 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/395980.pem
	I0510 19:21:32.429738  449945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/395980.pem /etc/ssl/certs/51391683.0"
	I0510 19:21:32.444783  449945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3959802.pem && ln -fs /usr/share/ca-certificates/3959802.pem /etc/ssl/certs/3959802.pem"
	I0510 19:21:32.459789  449945 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3959802.pem
	I0510 19:21:32.465379  449945 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 10 18:00 /usr/share/ca-certificates/3959802.pem
	I0510 19:21:32.465467  449945 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3959802.pem
	I0510 19:21:32.473184  449945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3959802.pem /etc/ssl/certs/3ec20f2e.0"
	I0510 19:21:32.486777  449945 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0510 19:21:32.493175  449945 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0510 19:21:32.493235  449945 kubeadm.go:392] StartCluster: {Name:old-k8s-version-089147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-089147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.225 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 19:21:32.493338  449945 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0510 19:21:32.493404  449945 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0510 19:21:32.539454  449945 cri.go:89] found id: ""
	I0510 19:21:32.539542  449945 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0510 19:21:32.553528  449945 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0510 19:21:32.566073  449945 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0510 19:21:32.581581  449945 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0510 19:21:32.581605  449945 kubeadm.go:157] found existing configuration files:
	
	I0510 19:21:32.581712  449945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0510 19:21:32.601258  449945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0510 19:21:32.601333  449945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0510 19:21:32.618027  449945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0510 19:21:32.638107  449945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0510 19:21:32.638195  449945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0510 19:21:32.663338  449945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0510 19:21:32.680737  449945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0510 19:21:32.680795  449945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0510 19:21:32.697673  449945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0510 19:21:32.716656  449945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0510 19:21:32.716756  449945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0510 19:21:32.734393  449945 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0510 19:21:33.037107  449945 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0510 19:21:33.037410  449945 kubeadm.go:310] [preflight] Running pre-flight checks
	I0510 19:21:33.226453  449945 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0510 19:21:33.226602  449945 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0510 19:21:33.226737  449945 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0510 19:21:33.467696  449945 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0510 19:21:33.471653  449945 out.go:235]   - Generating certificates and keys ...
	I0510 19:21:33.471790  449945 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0510 19:21:33.471886  449945 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0510 19:21:33.680409  449945 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0510 19:21:34.106643  449945 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0510 19:21:34.201256  449945 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0510 19:21:34.540153  449945 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0510 19:21:34.863830  449945 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0510 19:21:34.864024  449945 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-089147] and IPs [192.168.50.225 127.0.0.1 ::1]
	I0510 19:21:35.312169  449945 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0510 19:21:35.312486  449945 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-089147] and IPs [192.168.50.225 127.0.0.1 ::1]
	I0510 19:21:35.509676  449945 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0510 19:21:35.700092  449945 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0510 19:21:36.076176  449945 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0510 19:21:36.076444  449945 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0510 19:21:36.446354  449945 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0510 19:21:36.687450  449945 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0510 19:21:36.861085  449945 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0510 19:21:36.999215  449945 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0510 19:21:37.020519  449945 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0510 19:21:37.021033  449945 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0510 19:21:37.021096  449945 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0510 19:21:37.245762  449945 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0510 19:21:37.248397  449945 out.go:235]   - Booting up control plane ...
	I0510 19:21:37.248549  449945 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0510 19:21:37.292883  449945 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0510 19:21:37.296768  449945 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0510 19:21:37.296985  449945 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0510 19:21:37.304216  449945 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0510 19:22:17.304037  449945 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0510 19:22:17.304860  449945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:22:17.305123  449945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:22:22.306255  449945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:22:22.306510  449945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:22:32.308387  449945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:22:32.308676  449945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:22:52.309621  449945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:22:52.309838  449945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:23:32.310291  449945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:23:32.310610  449945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:23:32.310652  449945 kubeadm.go:310] 
	I0510 19:23:32.310711  449945 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0510 19:23:32.310762  449945 kubeadm.go:310] 		timed out waiting for the condition
	I0510 19:23:32.310772  449945 kubeadm.go:310] 
	I0510 19:23:32.310823  449945 kubeadm.go:310] 	This error is likely caused by:
	I0510 19:23:32.310868  449945 kubeadm.go:310] 		- The kubelet is not running
	I0510 19:23:32.311011  449945 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0510 19:23:32.311019  449945 kubeadm.go:310] 
	I0510 19:23:32.311156  449945 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0510 19:23:32.311206  449945 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0510 19:23:32.311297  449945 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0510 19:23:32.311308  449945 kubeadm.go:310] 
	I0510 19:23:32.311524  449945 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0510 19:23:32.311658  449945 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0510 19:23:32.311667  449945 kubeadm.go:310] 
	I0510 19:23:32.311815  449945 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0510 19:23:32.311977  449945 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0510 19:23:32.312104  449945 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0510 19:23:32.312242  449945 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0510 19:23:32.312258  449945 kubeadm.go:310] 
	I0510 19:23:32.314070  449945 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0510 19:23:32.314238  449945 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0510 19:23:32.314355  449945 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0510 19:23:32.314522  449945 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-089147] and IPs [192.168.50.225 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-089147] and IPs [192.168.50.225 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-089147] and IPs [192.168.50.225 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-089147] and IPs [192.168.50.225 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0510 19:23:32.314566  449945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0510 19:23:35.682672  449945 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.368080923s)
	I0510 19:23:35.682754  449945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0510 19:23:35.699453  449945 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0510 19:23:35.711689  449945 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0510 19:23:35.711714  449945 kubeadm.go:157] found existing configuration files:
	
	I0510 19:23:35.711775  449945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0510 19:23:35.722998  449945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0510 19:23:35.723076  449945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0510 19:23:35.735160  449945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0510 19:23:35.746207  449945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0510 19:23:35.746282  449945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0510 19:23:35.758236  449945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0510 19:23:35.768951  449945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0510 19:23:35.769025  449945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0510 19:23:35.780887  449945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0510 19:23:35.791499  449945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0510 19:23:35.791569  449945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0510 19:23:35.803997  449945 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0510 19:23:36.041550  449945 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0510 19:25:32.426177  449945 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0510 19:25:32.426313  449945 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0510 19:25:32.429352  449945 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0510 19:25:32.429461  449945 kubeadm.go:310] [preflight] Running pre-flight checks
	I0510 19:25:32.429647  449945 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0510 19:25:32.429777  449945 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0510 19:25:32.429911  449945 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0510 19:25:32.429996  449945 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0510 19:25:32.432164  449945 out.go:235]   - Generating certificates and keys ...
	I0510 19:25:32.432271  449945 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0510 19:25:32.432347  449945 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0510 19:25:32.432460  449945 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0510 19:25:32.432533  449945 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0510 19:25:32.432670  449945 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0510 19:25:32.432742  449945 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0510 19:25:32.432869  449945 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0510 19:25:32.432988  449945 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0510 19:25:32.433096  449945 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0510 19:25:32.433191  449945 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0510 19:25:32.433249  449945 kubeadm.go:310] [certs] Using the existing "sa" key
	I0510 19:25:32.433356  449945 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0510 19:25:32.433461  449945 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0510 19:25:32.433556  449945 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0510 19:25:32.433631  449945 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0510 19:25:32.433700  449945 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0510 19:25:32.433824  449945 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0510 19:25:32.433961  449945 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0510 19:25:32.434037  449945 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0510 19:25:32.434127  449945 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0510 19:25:32.436436  449945 out.go:235]   - Booting up control plane ...
	I0510 19:25:32.436563  449945 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0510 19:25:32.436715  449945 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0510 19:25:32.436816  449945 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0510 19:25:32.436934  449945 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0510 19:25:32.437147  449945 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0510 19:25:32.437242  449945 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0510 19:25:32.437359  449945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:25:32.437631  449945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:25:32.437738  449945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:25:32.438015  449945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:25:32.438101  449945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:25:32.438358  449945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:25:32.438464  449945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:25:32.438666  449945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:25:32.438752  449945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:25:32.438936  449945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:25:32.438943  449945 kubeadm.go:310] 
	I0510 19:25:32.438991  449945 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0510 19:25:32.439046  449945 kubeadm.go:310] 		timed out waiting for the condition
	I0510 19:25:32.439056  449945 kubeadm.go:310] 
	I0510 19:25:32.439106  449945 kubeadm.go:310] 	This error is likely caused by:
	I0510 19:25:32.439163  449945 kubeadm.go:310] 		- The kubelet is not running
	I0510 19:25:32.439358  449945 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0510 19:25:32.439379  449945 kubeadm.go:310] 
	I0510 19:25:32.439535  449945 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0510 19:25:32.439583  449945 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0510 19:25:32.439638  449945 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0510 19:25:32.439646  449945 kubeadm.go:310] 
	I0510 19:25:32.439785  449945 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0510 19:25:32.439911  449945 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0510 19:25:32.439923  449945 kubeadm.go:310] 
	I0510 19:25:32.440074  449945 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0510 19:25:32.440216  449945 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0510 19:25:32.440331  449945 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0510 19:25:32.440439  449945 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0510 19:25:32.440512  449945 kubeadm.go:310] 
	I0510 19:25:32.440523  449945 kubeadm.go:394] duration metric: took 3m59.947291468s to StartCluster
	I0510 19:25:32.440568  449945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:25:32.440650  449945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:25:32.513353  449945 cri.go:89] found id: ""
	I0510 19:25:32.513392  449945 logs.go:282] 0 containers: []
	W0510 19:25:32.513405  449945 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:25:32.513414  449945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:25:32.513546  449945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:25:32.562601  449945 cri.go:89] found id: ""
	I0510 19:25:32.562633  449945 logs.go:282] 0 containers: []
	W0510 19:25:32.562644  449945 logs.go:284] No container was found matching "etcd"
	I0510 19:25:32.562651  449945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:25:32.562747  449945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:25:32.612454  449945 cri.go:89] found id: ""
	I0510 19:25:32.612490  449945 logs.go:282] 0 containers: []
	W0510 19:25:32.612501  449945 logs.go:284] No container was found matching "coredns"
	I0510 19:25:32.612512  449945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:25:32.612596  449945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:25:32.668741  449945 cri.go:89] found id: ""
	I0510 19:25:32.668785  449945 logs.go:282] 0 containers: []
	W0510 19:25:32.668801  449945 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:25:32.668809  449945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:25:32.668876  449945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:25:32.722750  449945 cri.go:89] found id: ""
	I0510 19:25:32.722793  449945 logs.go:282] 0 containers: []
	W0510 19:25:32.722805  449945 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:25:32.722813  449945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:25:32.722887  449945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:25:32.771819  449945 cri.go:89] found id: ""
	I0510 19:25:32.771858  449945 logs.go:282] 0 containers: []
	W0510 19:25:32.771870  449945 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:25:32.771885  449945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:25:32.771955  449945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:25:32.815591  449945 cri.go:89] found id: ""
	I0510 19:25:32.815626  449945 logs.go:282] 0 containers: []
	W0510 19:25:32.815638  449945 logs.go:284] No container was found matching "kindnet"
	I0510 19:25:32.815654  449945 logs.go:123] Gathering logs for container status ...
	I0510 19:25:32.815671  449945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:25:32.873245  449945 logs.go:123] Gathering logs for kubelet ...
	I0510 19:25:32.873284  449945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:25:32.941510  449945 logs.go:123] Gathering logs for dmesg ...
	I0510 19:25:32.941558  449945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:25:32.961761  449945 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:25:32.961800  449945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:25:33.143092  449945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:25:33.143125  449945 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:25:33.143142  449945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0510 19:25:33.257247  449945 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0510 19:25:33.257335  449945 out.go:270] * 
	* 
	W0510 19:25:33.257412  449945 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0510 19:25:33.257431  449945 out.go:270] * 
	* 
	W0510 19:25:33.258239  449945 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0510 19:25:33.261499  449945 out.go:201] 
	W0510 19:25:33.263106  449945 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0510 19:25:33.263180  449945 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0510 19:25:33.263213  449945 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0510 19:25:33.265218  449945 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-089147 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-089147 -n old-k8s-version-089147
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-089147 -n old-k8s-version-089147: exit status 6 (262.382576ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0510 19:25:33.571300  457414 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-089147" does not appear in /home/jenkins/minikube-integration/20720-388787/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-089147" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (284.77s)
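The failure above is the kubeadm `wait-control-plane` phase timing out: every `kubelet-check` probe of http://localhost:10248/healthz is refused, so the kubelet never came up and no control-plane containers were ever created (all of the later `crictl ps` queries return empty). A minimal follow-up sketch, using only the commands the log itself suggests; the `--extra-config=kubelet.cgroup-driver=systemd` flag is the hint minikube prints and is not confirmed here as the actual root cause:

	# inside the VM (minikube ssh -p old-k8s-version-089147):
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# from the host, retry the start with the cgroup-driver hint from the log:
	out/minikube-linux-amd64 start -p old-k8s-version-089147 --driver=kvm2 \
	  --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd

If `journalctl -xeu kubelet` shows a cgroup-driver mismatch between the kubelet and CRI-O, the retry above is the path the suggestion and the linked issue (kubernetes/minikube#4172) point at; otherwise the kubelet log is the place to look for the real cause.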

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.62s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-089147 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context old-k8s-version-089147 create -f testdata/busybox.yaml: exit status 1 (55.538322ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-089147" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:194: kubectl --context old-k8s-version-089147 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-089147 -n old-k8s-version-089147
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-089147 -n old-k8s-version-089147: exit status 6 (288.027558ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0510 19:25:33.919144  457452 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-089147" does not appear in /home/jenkins/minikube-integration/20720-388787/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-089147" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-089147 -n old-k8s-version-089147
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-089147 -n old-k8s-version-089147: exit status 6 (280.153289ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0510 19:25:34.194349  457481 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-089147" does not appear in /home/jenkins/minikube-integration/20720-388787/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-089147" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.62s)
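DeployApp never reaches the cluster: kubectl rejects the --context lookup immediately, so the busybox manifest is never submitted. For reference, this is roughly how the same step would be run by hand against a repaired context (a sketch only; the pod name and timeout are assumptions, the real manifest lives in testdata/busybox.yaml):

	# submit the test workload with an explicit context
	kubectl --context old-k8s-version-089147 create -f testdata/busybox.yaml
	# wait for it to become Ready, assuming the manifest names its pod "busybox"
	kubectl --context old-k8s-version-089147 wait --for=condition=Ready pod/busybox --timeout=90s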

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (89.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-089147 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0510 19:25:40.898850  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/custom-flannel-380533/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-089147 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m28.930799765s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-089147 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-089147 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-089147 describe deploy/metrics-server -n kube-system: exit status 1 (46.594738ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-089147" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-089147 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-089147 -n old-k8s-version-089147
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-089147 -n old-k8s-version-089147: exit status 6 (234.686036ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0510 19:27:03.412379  458925 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-089147" does not appear in /home/jenkins/minikube-integration/20720-388787/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-089147" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (89.21s)
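The MK_ADDON_ENABLE failure above is downstream of the same broken control plane: the addon callback runs kubectl inside the VM against localhost:8443 and the connection is refused, meaning the apiserver never came back up after the stop. A short shell sketch of how that could be confirmed manually (commands are illustrative and assume the profile name from this run):

	# check whether an apiserver container is running inside the VM
	out/minikube-linux-amd64 -p old-k8s-version-089147 ssh "sudo crictl ps -a | grep kube-apiserver"
	# probe the apiserver health endpoint once the kubeconfig context is restored
	kubectl --context old-k8s-version-089147 get --raw=/healthz
	# collect full logs for a bug report, as the error box suggests
	out/minikube-linux-amd64 -p old-k8s-version-089147 logs --file=logs.txt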

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (510.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-089147 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0510 19:27:09.239475  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/calico-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:27:15.382219  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/bridge-380533/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-089147 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (8m27.788006687s)

                                                
                                                
-- stdout --
	* [old-k8s-version-089147] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20720
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20720-388787/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-388787/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.33.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.33.0
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-089147" primary control-plane node in "old-k8s-version-089147" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-089147" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0510 19:27:08.993093  459056 out.go:345] Setting OutFile to fd 1 ...
	I0510 19:27:08.993216  459056 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 19:27:08.993225  459056 out.go:358] Setting ErrFile to fd 2...
	I0510 19:27:08.993231  459056 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 19:27:08.993455  459056 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-388787/.minikube/bin
	I0510 19:27:08.994063  459056 out.go:352] Setting JSON to false
	I0510 19:27:08.995114  459056 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":32977,"bootTime":1746872252,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0510 19:27:08.995228  459056 start.go:140] virtualization: kvm guest
	I0510 19:27:08.997583  459056 out.go:177] * [old-k8s-version-089147] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0510 19:27:08.999200  459056 notify.go:220] Checking for updates...
	I0510 19:27:08.999223  459056 out.go:177]   - MINIKUBE_LOCATION=20720
	I0510 19:27:09.001300  459056 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0510 19:27:09.002705  459056 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20720-388787/kubeconfig
	I0510 19:27:09.004555  459056 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-388787/.minikube
	I0510 19:27:09.006175  459056 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0510 19:27:09.007749  459056 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0510 19:27:09.010112  459056 config.go:182] Loaded profile config "old-k8s-version-089147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0510 19:27:09.010756  459056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:27:09.010876  459056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:27:09.026680  459056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44745
	I0510 19:27:09.027213  459056 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:27:09.027815  459056 main.go:141] libmachine: Using API Version  1
	I0510 19:27:09.027838  459056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:27:09.028225  459056 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:27:09.028409  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .DriverName
	I0510 19:27:09.030709  459056 out.go:177] * Kubernetes 1.33.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.33.0
	I0510 19:27:09.032233  459056 driver.go:404] Setting default libvirt URI to qemu:///system
	I0510 19:27:09.032571  459056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:27:09.032615  459056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:27:09.048319  459056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35185
	I0510 19:27:09.048960  459056 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:27:09.049585  459056 main.go:141] libmachine: Using API Version  1
	I0510 19:27:09.049611  459056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:27:09.049969  459056 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:27:09.050201  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .DriverName
	I0510 19:27:09.089632  459056 out.go:177] * Using the kvm2 driver based on existing profile
	I0510 19:27:09.091282  459056 start.go:304] selected driver: kvm2
	I0510 19:27:09.091306  459056 start.go:908] validating driver "kvm2" against &{Name:old-k8s-version-089147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-089147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.225 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 19:27:09.091458  459056 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0510 19:27:09.092237  459056 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0510 19:27:09.092360  459056 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20720-388787/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0510 19:27:09.108588  459056 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0510 19:27:09.109080  459056 start_flags.go:975] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0510 19:27:09.109119  459056 cni.go:84] Creating CNI manager for ""
	I0510 19:27:09.109179  459056 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0510 19:27:09.109221  459056 start.go:347] cluster config:
	{Name:old-k8s-version-089147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-089147 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.225 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 19:27:09.109358  459056 iso.go:125] acquiring lock: {Name:mk19640015999219180c6685480547adf0c02201 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0510 19:27:09.111734  459056 out.go:177] * Starting "old-k8s-version-089147" primary control-plane node in "old-k8s-version-089147" cluster
	I0510 19:27:09.113178  459056 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0510 19:27:09.113271  459056 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0510 19:27:09.113289  459056 cache.go:56] Caching tarball of preloaded images
	I0510 19:27:09.113402  459056 preload.go:172] Found /home/jenkins/minikube-integration/20720-388787/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0510 19:27:09.113415  459056 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0510 19:27:09.113541  459056 profile.go:143] Saving config to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/config.json ...
	I0510 19:27:09.113795  459056 start.go:360] acquireMachinesLock for old-k8s-version-089147: {Name:mk11499d7756d503a7a24339ad1a7f9ab9dc0fab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0510 19:27:09.113855  459056 start.go:364] duration metric: took 35.73µs to acquireMachinesLock for "old-k8s-version-089147"
	I0510 19:27:09.113877  459056 start.go:96] Skipping create...Using existing machine configuration
	I0510 19:27:09.113885  459056 fix.go:54] fixHost starting: 
	I0510 19:27:09.114206  459056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:27:09.114247  459056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:27:09.129717  459056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38895
	I0510 19:27:09.130186  459056 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:27:09.130725  459056 main.go:141] libmachine: Using API Version  1
	I0510 19:27:09.130747  459056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:27:09.131069  459056 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:27:09.131249  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .DriverName
	I0510 19:27:09.131362  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetState
	I0510 19:27:09.133066  459056 fix.go:112] recreateIfNeeded on old-k8s-version-089147: state=Stopped err=<nil>
	I0510 19:27:09.133092  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .DriverName
	W0510 19:27:09.133257  459056 fix.go:138] unexpected machine state, will restart: <nil>
	I0510 19:27:09.136240  459056 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-089147" ...
	I0510 19:27:09.137434  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .Start
	I0510 19:27:09.137658  459056 main.go:141] libmachine: (old-k8s-version-089147) starting domain...
	I0510 19:27:09.137679  459056 main.go:141] libmachine: (old-k8s-version-089147) ensuring networks are active...
	I0510 19:27:09.138470  459056 main.go:141] libmachine: (old-k8s-version-089147) Ensuring network default is active
	I0510 19:27:09.138858  459056 main.go:141] libmachine: (old-k8s-version-089147) Ensuring network mk-old-k8s-version-089147 is active
	I0510 19:27:09.139220  459056 main.go:141] libmachine: (old-k8s-version-089147) getting domain XML...
	I0510 19:27:09.140036  459056 main.go:141] libmachine: (old-k8s-version-089147) creating domain...
	I0510 19:27:10.416574  459056 main.go:141] libmachine: (old-k8s-version-089147) waiting for IP...
	I0510 19:27:10.417359  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:10.417963  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | unable to find current IP address of domain old-k8s-version-089147 in network mk-old-k8s-version-089147
	I0510 19:27:10.418088  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | I0510 19:27:10.417969  459091 retry.go:31] will retry after 246.040392ms: waiting for domain to come up
	I0510 19:27:10.665405  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:10.666062  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | unable to find current IP address of domain old-k8s-version-089147 in network mk-old-k8s-version-089147
	I0510 19:27:10.666147  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | I0510 19:27:10.666033  459091 retry.go:31] will retry after 240.7184ms: waiting for domain to come up
	I0510 19:27:10.908830  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:10.909330  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | unable to find current IP address of domain old-k8s-version-089147 in network mk-old-k8s-version-089147
	I0510 19:27:10.909357  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | I0510 19:27:10.909297  459091 retry.go:31] will retry after 484.313558ms: waiting for domain to come up
	I0510 19:27:11.394971  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:11.395576  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | unable to find current IP address of domain old-k8s-version-089147 in network mk-old-k8s-version-089147
	I0510 19:27:11.395612  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | I0510 19:27:11.395544  459091 retry.go:31] will retry after 582.040906ms: waiting for domain to come up
	I0510 19:27:11.979556  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:11.980107  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | unable to find current IP address of domain old-k8s-version-089147 in network mk-old-k8s-version-089147
	I0510 19:27:11.980176  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | I0510 19:27:11.980070  459091 retry.go:31] will retry after 592.787912ms: waiting for domain to come up
	I0510 19:27:12.575017  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:12.575706  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | unable to find current IP address of domain old-k8s-version-089147 in network mk-old-k8s-version-089147
	I0510 19:27:12.575753  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | I0510 19:27:12.575646  459091 retry.go:31] will retry after 692.181133ms: waiting for domain to come up
	I0510 19:27:13.269650  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:13.270229  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | unable to find current IP address of domain old-k8s-version-089147 in network mk-old-k8s-version-089147
	I0510 19:27:13.270283  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | I0510 19:27:13.270194  459091 retry.go:31] will retry after 771.45232ms: waiting for domain to come up
	I0510 19:27:14.043581  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:14.044281  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | unable to find current IP address of domain old-k8s-version-089147 in network mk-old-k8s-version-089147
	I0510 19:27:14.044308  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | I0510 19:27:14.044221  459091 retry.go:31] will retry after 1.344449521s: waiting for domain to come up
	I0510 19:27:15.390956  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:15.391465  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | unable to find current IP address of domain old-k8s-version-089147 in network mk-old-k8s-version-089147
	I0510 19:27:15.391513  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | I0510 19:27:15.391451  459091 retry.go:31] will retry after 1.487631951s: waiting for domain to come up
	I0510 19:27:16.881271  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:16.881850  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | unable to find current IP address of domain old-k8s-version-089147 in network mk-old-k8s-version-089147
	I0510 19:27:16.881878  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | I0510 19:27:16.881797  459091 retry.go:31] will retry after 1.423780279s: waiting for domain to come up
	I0510 19:27:18.307559  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:18.308156  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | unable to find current IP address of domain old-k8s-version-089147 in network mk-old-k8s-version-089147
	I0510 19:27:18.308206  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | I0510 19:27:18.308154  459091 retry.go:31] will retry after 1.972990441s: waiting for domain to come up
	I0510 19:27:20.282773  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:20.283336  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | unable to find current IP address of domain old-k8s-version-089147 in network mk-old-k8s-version-089147
	I0510 19:27:20.283406  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | I0510 19:27:20.283343  459091 retry.go:31] will retry after 3.189593727s: waiting for domain to come up
	I0510 19:27:23.618741  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:23.619115  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | unable to find current IP address of domain old-k8s-version-089147 in network mk-old-k8s-version-089147
	I0510 19:27:23.619143  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | I0510 19:27:23.619075  459091 retry.go:31] will retry after 3.237680008s: waiting for domain to come up
	I0510 19:27:26.860579  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:26.861169  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has current primary IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:26.861235  459056 main.go:141] libmachine: (old-k8s-version-089147) found domain IP: 192.168.50.225
	I0510 19:27:26.861263  459056 main.go:141] libmachine: (old-k8s-version-089147) reserving static IP address...
	I0510 19:27:26.861678  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "old-k8s-version-089147", mac: "52:54:00:c5:c6:86", ip: "192.168.50.225"} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:26.861748  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | skip adding static IP to network mk-old-k8s-version-089147 - found existing host DHCP lease matching {name: "old-k8s-version-089147", mac: "52:54:00:c5:c6:86", ip: "192.168.50.225"}
	I0510 19:27:26.861769  459056 main.go:141] libmachine: (old-k8s-version-089147) reserved static IP address 192.168.50.225 for domain old-k8s-version-089147
	I0510 19:27:26.861785  459056 main.go:141] libmachine: (old-k8s-version-089147) waiting for SSH...
	I0510 19:27:26.861791  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | Getting to WaitForSSH function...
	I0510 19:27:26.863716  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:26.864074  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:26.864105  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:26.864224  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | Using SSH client type: external
	I0510 19:27:26.864249  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | Using SSH private key: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/old-k8s-version-089147/id_rsa (-rw-------)
	I0510 19:27:26.864275  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.225 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20720-388787/.minikube/machines/old-k8s-version-089147/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0510 19:27:26.864284  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | About to run SSH command:
	I0510 19:27:26.864292  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | exit 0
	I0510 19:27:26.992149  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | SSH cmd err, output: <nil>: 
	I0510 19:27:26.992596  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetConfigRaw
	I0510 19:27:26.993291  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetIP
	I0510 19:27:26.996245  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:26.996734  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:26.996760  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:26.996987  459056 profile.go:143] Saving config to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/config.json ...
	I0510 19:27:26.997231  459056 machine.go:93] provisionDockerMachine start ...
	I0510 19:27:26.997257  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .DriverName
	I0510 19:27:26.997484  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHHostname
	I0510 19:27:26.999968  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.000439  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:27.000476  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.000707  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHPort
	I0510 19:27:27.000924  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:27.001051  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:27.001195  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHUsername
	I0510 19:27:27.001309  459056 main.go:141] libmachine: Using SSH client type: native
	I0510 19:27:27.001588  459056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.50.225 22 <nil> <nil>}
	I0510 19:27:27.001603  459056 main.go:141] libmachine: About to run SSH command:
	hostname
	I0510 19:27:27.120348  459056 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0510 19:27:27.120385  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetMachineName
	I0510 19:27:27.120685  459056 buildroot.go:166] provisioning hostname "old-k8s-version-089147"
	I0510 19:27:27.120712  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetMachineName
	I0510 19:27:27.120937  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHHostname
	I0510 19:27:27.123906  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.124166  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:27.124192  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.124346  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHPort
	I0510 19:27:27.124515  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:27.124641  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:27.124770  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHUsername
	I0510 19:27:27.124903  459056 main.go:141] libmachine: Using SSH client type: native
	I0510 19:27:27.125130  459056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.50.225 22 <nil> <nil>}
	I0510 19:27:27.125146  459056 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-089147 && echo "old-k8s-version-089147" | sudo tee /etc/hostname
	I0510 19:27:27.254277  459056 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-089147
	
	I0510 19:27:27.254306  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHHostname
	I0510 19:27:27.257358  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.257763  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:27.257793  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.258010  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHPort
	I0510 19:27:27.258221  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:27.258392  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:27.258550  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHUsername
	I0510 19:27:27.258746  459056 main.go:141] libmachine: Using SSH client type: native
	I0510 19:27:27.258987  459056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.50.225 22 <nil> <nil>}
	I0510 19:27:27.259004  459056 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-089147' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-089147/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-089147' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0510 19:27:27.383141  459056 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0510 19:27:27.383177  459056 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20720-388787/.minikube CaCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20720-388787/.minikube}
	I0510 19:27:27.383245  459056 buildroot.go:174] setting up certificates
	I0510 19:27:27.383268  459056 provision.go:84] configureAuth start
	I0510 19:27:27.383282  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetMachineName
	I0510 19:27:27.383632  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetIP
	I0510 19:27:27.386412  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.386733  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:27.386760  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.386920  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHHostname
	I0510 19:27:27.388990  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.389308  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:27.389346  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.389489  459056 provision.go:143] copyHostCerts
	I0510 19:27:27.389586  459056 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem, removing ...
	I0510 19:27:27.389611  459056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem
	I0510 19:27:27.389674  459056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem (1675 bytes)
	I0510 19:27:27.389763  459056 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem, removing ...
	I0510 19:27:27.389771  459056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem
	I0510 19:27:27.389797  459056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem (1078 bytes)
	I0510 19:27:27.389845  459056 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem, removing ...
	I0510 19:27:27.389852  459056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem
	I0510 19:27:27.389873  459056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem (1123 bytes)
	I0510 19:27:27.389917  459056 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-089147 san=[127.0.0.1 192.168.50.225 localhost minikube old-k8s-version-089147]
	I0510 19:27:27.706220  459056 provision.go:177] copyRemoteCerts
	I0510 19:27:27.706291  459056 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0510 19:27:27.706321  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHHostname
	I0510 19:27:27.709279  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.709662  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:27.709704  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.709901  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHPort
	I0510 19:27:27.710147  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:27.710312  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHUsername
	I0510 19:27:27.710453  459056 sshutil.go:53] new ssh client: &{IP:192.168.50.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/old-k8s-version-089147/id_rsa Username:docker}
	I0510 19:27:27.796192  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0510 19:27:27.826223  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0510 19:27:27.856165  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0510 19:27:27.885803  459056 provision.go:87] duration metric: took 502.517549ms to configureAuth
	I0510 19:27:27.885844  459056 buildroot.go:189] setting minikube options for container-runtime
	I0510 19:27:27.886049  459056 config.go:182] Loaded profile config "old-k8s-version-089147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0510 19:27:27.886126  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHHostname
	I0510 19:27:27.888892  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.889274  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:27.889304  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.889432  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHPort
	I0510 19:27:27.889662  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:27.889842  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:27.890001  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHUsername
	I0510 19:27:27.890137  459056 main.go:141] libmachine: Using SSH client type: native
	I0510 19:27:27.890398  459056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.50.225 22 <nil> <nil>}
	I0510 19:27:27.890414  459056 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0510 19:27:28.145754  459056 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0510 19:27:28.145780  459056 machine.go:96] duration metric: took 1.148533327s to provisionDockerMachine
	I0510 19:27:28.145793  459056 start.go:293] postStartSetup for "old-k8s-version-089147" (driver="kvm2")
	I0510 19:27:28.145805  459056 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0510 19:27:28.145843  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .DriverName
	I0510 19:27:28.146213  459056 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0510 19:27:28.146241  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHHostname
	I0510 19:27:28.148935  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:28.149310  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:28.149338  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:28.149442  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHPort
	I0510 19:27:28.149630  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:28.149794  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHUsername
	I0510 19:27:28.149969  459056 sshutil.go:53] new ssh client: &{IP:192.168.50.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/old-k8s-version-089147/id_rsa Username:docker}
	I0510 19:27:28.237429  459056 ssh_runner.go:195] Run: cat /etc/os-release
	I0510 19:27:28.242504  459056 info.go:137] Remote host: Buildroot 2024.11.2
	I0510 19:27:28.242535  459056 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-388787/.minikube/addons for local assets ...
	I0510 19:27:28.242600  459056 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-388787/.minikube/files for local assets ...
	I0510 19:27:28.242694  459056 filesync.go:149] local asset: /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem -> 3959802.pem in /etc/ssl/certs
	I0510 19:27:28.242795  459056 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0510 19:27:28.255581  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem --> /etc/ssl/certs/3959802.pem (1708 bytes)
	I0510 19:27:28.285383  459056 start.go:296] duration metric: took 139.572888ms for postStartSetup
	I0510 19:27:28.285430  459056 fix.go:56] duration metric: took 19.171545731s for fixHost
	I0510 19:27:28.285452  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHHostname
	I0510 19:27:28.288861  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:28.289256  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:28.289288  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:28.289472  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHPort
	I0510 19:27:28.289747  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:28.289968  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:28.290122  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHUsername
	I0510 19:27:28.290275  459056 main.go:141] libmachine: Using SSH client type: native
	I0510 19:27:28.290504  459056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.50.225 22 <nil> <nil>}
	I0510 19:27:28.290514  459056 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0510 19:27:28.400790  459056 main.go:141] libmachine: SSH cmd err, output: <nil>: 1746905248.354737003
	
	I0510 19:27:28.400820  459056 fix.go:216] guest clock: 1746905248.354737003
	I0510 19:27:28.400830  459056 fix.go:229] Guest: 2025-05-10 19:27:28.354737003 +0000 UTC Remote: 2025-05-10 19:27:28.285433906 +0000 UTC m=+19.332417949 (delta=69.303097ms)
	I0510 19:27:28.400874  459056 fix.go:200] guest clock delta is within tolerance: 69.303097ms
	I0510 19:27:28.400901  459056 start.go:83] releasing machines lock for "old-k8s-version-089147", held for 19.287012994s
	I0510 19:27:28.400943  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .DriverName
	I0510 19:27:28.401246  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetIP
	I0510 19:27:28.404469  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:28.404985  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:28.405012  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:28.405227  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .DriverName
	I0510 19:27:28.405870  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .DriverName
	I0510 19:27:28.406067  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .DriverName
	I0510 19:27:28.406182  459056 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0510 19:27:28.406225  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHHostname
	I0510 19:27:28.406371  459056 ssh_runner.go:195] Run: cat /version.json
	I0510 19:27:28.406414  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHHostname
	I0510 19:27:28.409133  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:28.409451  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:28.409485  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:28.409508  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:28.409700  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHPort
	I0510 19:27:28.409895  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:28.409939  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:28.409971  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:28.410074  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHUsername
	I0510 19:27:28.410144  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHPort
	I0510 19:27:28.410238  459056 sshutil.go:53] new ssh client: &{IP:192.168.50.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/old-k8s-version-089147/id_rsa Username:docker}
	I0510 19:27:28.410313  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:28.410431  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHUsername
	I0510 19:27:28.410556  459056 sshutil.go:53] new ssh client: &{IP:192.168.50.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/old-k8s-version-089147/id_rsa Username:docker}
	I0510 19:27:28.522881  459056 ssh_runner.go:195] Run: systemctl --version
	I0510 19:27:28.529679  459056 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0510 19:27:28.679208  459056 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0510 19:27:28.686449  459056 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0510 19:27:28.686542  459056 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0510 19:27:28.706391  459056 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0510 19:27:28.706422  459056 start.go:495] detecting cgroup driver to use...
	I0510 19:27:28.706502  459056 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0510 19:27:28.725500  459056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0510 19:27:28.743141  459056 docker.go:225] disabling cri-docker service (if available) ...
	I0510 19:27:28.743218  459056 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0510 19:27:28.763489  459056 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0510 19:27:28.782362  459056 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0510 19:27:28.930849  459056 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0510 19:27:29.145684  459056 docker.go:241] disabling docker service ...
	I0510 19:27:29.145777  459056 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0510 19:27:29.162572  459056 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0510 19:27:29.177892  459056 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0510 19:27:29.337238  459056 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0510 19:27:29.498230  459056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0510 19:27:29.515221  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0510 19:27:29.539326  459056 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0510 19:27:29.539400  459056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:27:29.551931  459056 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0510 19:27:29.552027  459056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:27:29.563727  459056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:27:29.576495  459056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:27:29.589274  459056 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0510 19:27:29.602567  459056 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0510 19:27:29.613569  459056 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0510 19:27:29.613666  459056 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0510 19:27:29.631475  459056 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0510 19:27:29.646992  459056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 19:27:29.783415  459056 ssh_runner.go:195] Run: sudo systemctl restart crio
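The sed edits above point CRI-O at the registry.k8s.io/pause:3.2 pause image and switch its cgroup manager to cgroupfs before the daemon is restarted. A rough Go equivalent of those two in-place edits, using the same drop-in path and values as in the log (running it directly on the guest is an assumption; the log performs this through the ssh runner):

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        // Path and replacement values are the ones visible in the sed commands above.
        path := "/etc/crio/crio.conf.d/02-crio.conf"

        data, err := os.ReadFile(path)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }

        // Same effect as the two `sed -i 's|^.*key = .*$|...|'` edits in the log.
        data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.2"`))
        data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))

        if err := os.WriteFile(path, data, 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("updated", path, "; restart crio for the change to take effect")
    }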
	I0510 19:27:29.908799  459056 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0510 19:27:29.908871  459056 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0510 19:27:29.916611  459056 start.go:563] Will wait 60s for crictl version
	I0510 19:27:29.916678  459056 ssh_runner.go:195] Run: which crictl
	I0510 19:27:29.922342  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0510 19:27:29.970957  459056 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0510 19:27:29.971075  459056 ssh_runner.go:195] Run: crio --version
	I0510 19:27:30.013260  459056 ssh_runner.go:195] Run: crio --version
	I0510 19:27:30.045551  459056 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0510 19:27:30.046696  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetIP
	I0510 19:27:30.049916  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:30.050298  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:30.050343  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:30.050593  459056 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0510 19:27:30.055795  459056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0510 19:27:30.072862  459056 kubeadm.go:875] updating cluster {Name:old-k8s-version-089147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-089147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.225 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0510 19:27:30.073023  459056 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0510 19:27:30.073092  459056 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 19:27:30.136655  459056 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0510 19:27:30.136733  459056 ssh_runner.go:195] Run: which lz4
	I0510 19:27:30.141756  459056 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0510 19:27:30.146784  459056 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0510 19:27:30.146832  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0510 19:27:32.084982  459056 crio.go:462] duration metric: took 1.943253158s to copy over tarball
	I0510 19:27:32.085084  459056 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0510 19:27:34.680248  459056 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.595132142s)
	I0510 19:27:34.680275  459056 crio.go:469] duration metric: took 2.595258666s to extract the tarball
	I0510 19:27:34.680284  459056 ssh_runner.go:146] rm: /preloaded.tar.lz4
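The preload step above copies a roughly 473 MB lz4 tarball of cached images onto the guest and unpacks it into /var, preserving security.capability xattrs, in about 2.6 seconds. A small sketch of the extract-and-time part, assuming it runs on the guest itself rather than through the ssh runner:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        start := time.Now()

        // Same tar invocation as in the log.
        cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        if out, err := cmd.CombinedOutput(); err != nil {
            fmt.Printf("extract failed: %v\n%s", err, out)
            return
        }

        // Mirrors the "duration metric: took ... to extract the tarball" log line.
        fmt.Printf("took %s to extract the tarball\n", time.Since(start))
    }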
	I0510 19:27:34.725856  459056 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 19:27:34.769530  459056 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0510 19:27:34.769567  459056 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0510 19:27:34.769639  459056 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0510 19:27:34.769682  459056 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0510 19:27:34.769696  459056 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0510 19:27:34.769712  459056 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0510 19:27:34.769686  459056 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0510 19:27:34.769766  459056 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0510 19:27:34.769779  459056 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0510 19:27:34.769798  459056 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0510 19:27:34.771393  459056 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0510 19:27:34.771413  459056 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0510 19:27:34.771433  459056 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0510 19:27:34.771391  459056 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0510 19:27:34.771454  459056 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0510 19:27:34.771457  459056 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0510 19:27:34.771488  459056 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0510 19:27:34.771522  459056 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0510 19:27:34.903898  459056 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0510 19:27:34.909532  459056 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0510 19:27:34.909958  459056 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0510 19:27:34.920714  459056 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0510 19:27:34.927038  459056 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0510 19:27:34.932543  459056 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0510 19:27:34.939391  459056 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0510 19:27:35.035164  459056 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0510 19:27:35.035225  459056 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0510 19:27:35.035308  459056 ssh_runner.go:195] Run: which crictl
	I0510 19:27:35.046705  459056 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0510 19:27:35.046773  459056 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0510 19:27:35.046831  459056 ssh_runner.go:195] Run: which crictl
	I0510 19:27:35.102600  459056 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0510 19:27:35.102657  459056 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0510 19:27:35.102728  459056 ssh_runner.go:195] Run: which crictl
	I0510 19:27:35.114127  459056 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0510 19:27:35.114197  459056 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0510 19:27:35.114220  459056 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0510 19:27:35.114255  459056 ssh_runner.go:195] Run: which crictl
	I0510 19:27:35.114262  459056 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0510 19:27:35.114305  459056 ssh_runner.go:195] Run: which crictl
	I0510 19:27:35.114526  459056 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0510 19:27:35.114562  459056 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0510 19:27:35.114596  459056 ssh_runner.go:195] Run: which crictl
	I0510 19:27:35.135454  459056 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0510 19:27:35.135500  459056 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0510 19:27:35.135549  459056 ssh_runner.go:195] Run: which crictl
	I0510 19:27:35.135570  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0510 19:27:35.135627  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0510 19:27:35.135673  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0510 19:27:35.135728  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0510 19:27:35.135753  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0510 19:27:35.135782  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0510 19:27:35.246929  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0510 19:27:35.246999  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0510 19:27:35.304129  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0510 19:27:35.304183  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0510 19:27:35.304193  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0510 19:27:35.304231  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0510 19:27:35.304278  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0510 19:27:35.381894  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0510 19:27:35.381939  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0510 19:27:35.482712  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0510 19:27:35.482788  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0510 19:27:35.482823  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0510 19:27:35.482858  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0510 19:27:35.482947  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0510 19:27:35.526146  459056 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0510 19:27:35.557215  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0510 19:27:35.649079  459056 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0510 19:27:35.649160  459056 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0510 19:27:35.649222  459056 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0510 19:27:35.649256  459056 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0510 19:27:35.649351  459056 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0510 19:27:35.667931  459056 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0510 19:27:35.671336  459056 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0510 19:27:35.818843  459056 cache_images.go:92] duration metric: took 1.049254698s to LoadCachedImages
	W0510 19:27:35.818925  459056 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0510 19:27:35.818936  459056 kubeadm.go:926] updating node { 192.168.50.225 8443 v1.20.0 crio true true} ...
	I0510 19:27:35.819071  459056 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-089147 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.225
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-089147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0510 19:27:35.819178  459056 ssh_runner.go:195] Run: crio config
	I0510 19:27:35.871053  459056 cni.go:84] Creating CNI manager for ""
	I0510 19:27:35.871078  459056 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0510 19:27:35.871088  459056 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0510 19:27:35.871108  459056 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.225 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-089147 NodeName:old-k8s-version-089147 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.225"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.225 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0510 19:27:35.871325  459056 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.225
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-089147"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.225
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.225"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0510 19:27:35.871410  459056 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0510 19:27:35.884778  459056 binaries.go:44] Found k8s binaries, skipping transfer
	I0510 19:27:35.884850  459056 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0510 19:27:35.897755  459056 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0510 19:27:35.920392  459056 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0510 19:27:35.944066  459056 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0510 19:27:35.969513  459056 ssh_runner.go:195] Run: grep 192.168.50.225	control-plane.minikube.internal$ /etc/hosts
	I0510 19:27:35.973968  459056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.225	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
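Both /etc/hosts updates in this log (host.minikube.internal earlier, control-plane.minikube.internal here) follow the same pattern: filter out any existing line for the name, append the new address, and copy the result back over /etc/hosts. A hedged Go sketch of that pattern, with the temp-file name chosen for illustration:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // Name and address are the ones in the log line above.
        const name = "control-plane.minikube.internal"
        const entry = "192.168.50.225\t" + name

        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }

        // Drop any existing line for the name (the `grep -v` part), then append
        // the fresh entry (the `echo` part).
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, entry)

        // The log writes to /tmp/h.$$ and then copies it into place with sudo;
        // a fixed temp path stands in for that here.
        tmp := "/tmp/hosts.new"
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("wrote", tmp, "- copy it over /etc/hosts with root privileges")
    }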
	I0510 19:27:35.989113  459056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 19:27:36.126144  459056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0510 19:27:36.161368  459056 certs.go:68] Setting up /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147 for IP: 192.168.50.225
	I0510 19:27:36.161393  459056 certs.go:194] generating shared ca certs ...
	I0510 19:27:36.161414  459056 certs.go:226] acquiring lock for ca certs: {Name:mk8db74782205da4ac57ef815dd495cda255251a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 19:27:36.161602  459056 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20720-388787/.minikube/ca.key
	I0510 19:27:36.161660  459056 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.key
	I0510 19:27:36.161675  459056 certs.go:256] generating profile certs ...
	I0510 19:27:36.161815  459056 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/client.key
	I0510 19:27:36.161897  459056 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/apiserver.key.3362ca92
	I0510 19:27:36.161951  459056 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/proxy-client.key
	I0510 19:27:36.162093  459056 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/395980.pem (1338 bytes)
	W0510 19:27:36.162134  459056 certs.go:480] ignoring /home/jenkins/minikube-integration/20720-388787/.minikube/certs/395980_empty.pem, impossibly tiny 0 bytes
	I0510 19:27:36.162148  459056 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem (1679 bytes)
	I0510 19:27:36.162186  459056 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem (1078 bytes)
	I0510 19:27:36.162219  459056 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem (1123 bytes)
	I0510 19:27:36.162251  459056 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem (1675 bytes)
	I0510 19:27:36.162305  459056 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem (1708 bytes)
	I0510 19:27:36.163029  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0510 19:27:36.207434  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0510 19:27:36.254337  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0510 19:27:36.302029  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0510 19:27:36.340123  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0510 19:27:36.372457  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0510 19:27:36.417695  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0510 19:27:36.454687  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0510 19:27:36.491453  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0510 19:27:36.527708  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/certs/395980.pem --> /usr/share/ca-certificates/395980.pem (1338 bytes)
	I0510 19:27:36.566188  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem --> /usr/share/ca-certificates/3959802.pem (1708 bytes)
	I0510 19:27:36.605695  459056 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0510 19:27:36.633416  459056 ssh_runner.go:195] Run: openssl version
	I0510 19:27:36.640812  459056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0510 19:27:36.655287  459056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0510 19:27:36.660996  459056 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 10 17:52 /usr/share/ca-certificates/minikubeCA.pem
	I0510 19:27:36.661078  459056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0510 19:27:36.671509  459056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0510 19:27:36.685341  459056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/395980.pem && ln -fs /usr/share/ca-certificates/395980.pem /etc/ssl/certs/395980.pem"
	I0510 19:27:36.701195  459056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/395980.pem
	I0510 19:27:36.707338  459056 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 10 18:00 /usr/share/ca-certificates/395980.pem
	I0510 19:27:36.707426  459056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/395980.pem
	I0510 19:27:36.715832  459056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/395980.pem /etc/ssl/certs/51391683.0"
	I0510 19:27:36.730499  459056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3959802.pem && ln -fs /usr/share/ca-certificates/3959802.pem /etc/ssl/certs/3959802.pem"
	I0510 19:27:36.745937  459056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3959802.pem
	I0510 19:27:36.753124  459056 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 10 18:00 /usr/share/ca-certificates/3959802.pem
	I0510 19:27:36.753219  459056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3959802.pem
	I0510 19:27:36.763162  459056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3959802.pem /etc/ssl/certs/3ec20f2e.0"
	I0510 19:27:36.777980  459056 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0510 19:27:36.784377  459056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0510 19:27:36.792871  459056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0510 19:27:36.801028  459056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0510 19:27:36.809570  459056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0510 19:27:36.820430  459056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0510 19:27:36.830234  459056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
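The openssl runs above are 24-hour expiry checks: -checkend 86400 exits non-zero if the certificate lapses within the next day. A small Go sketch of the same check with crypto/x509, using one of the certificate paths from the log:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // One of the certificate paths checked in the log.
        path := "/var/lib/minikube/certs/apiserver-kubelet-client.crt"

        raw, err := os.ReadFile(path)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            fmt.Fprintln(os.Stderr, "no PEM block in", path)
            os.Exit(1)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }

        // 86400 seconds, the same window the -checkend flag uses above.
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println(path, "expires within 24h")
            os.Exit(1)
        }
        fmt.Println(path, "is valid for at least another 24h")
    }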
	I0510 19:27:36.838492  459056 kubeadm.go:392] StartCluster: {Name:old-k8s-version-089147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-089147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.225 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 19:27:36.838628  459056 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0510 19:27:36.838710  459056 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0510 19:27:36.883637  459056 cri.go:89] found id: ""
	I0510 19:27:36.883721  459056 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0510 19:27:36.898381  459056 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0510 19:27:36.898418  459056 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0510 19:27:36.898479  459056 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0510 19:27:36.911968  459056 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0510 19:27:36.912423  459056 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-089147" does not appear in /home/jenkins/minikube-integration/20720-388787/kubeconfig
	I0510 19:27:36.912622  459056 kubeconfig.go:62] /home/jenkins/minikube-integration/20720-388787/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-089147" cluster setting kubeconfig missing "old-k8s-version-089147" context setting]
	I0510 19:27:36.912933  459056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-388787/kubeconfig: {Name:mk5ad7285fe4c17b2779ea6d5a539f101fe94797 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 19:27:36.978461  459056 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0510 19:27:36.992010  459056 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.50.225
	I0510 19:27:36.992058  459056 kubeadm.go:1152] stopping kube-system containers ...
	I0510 19:27:36.992090  459056 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0510 19:27:36.992157  459056 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0510 19:27:37.036332  459056 cri.go:89] found id: ""
	I0510 19:27:37.036417  459056 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0510 19:27:37.061304  459056 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0510 19:27:37.077360  459056 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0510 19:27:37.077388  459056 kubeadm.go:157] found existing configuration files:
	
	I0510 19:27:37.077447  459056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0510 19:27:37.091136  459056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0510 19:27:37.091207  459056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0510 19:27:37.108190  459056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0510 19:27:37.122863  459056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0510 19:27:37.122925  459056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0510 19:27:37.135581  459056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0510 19:27:37.151096  459056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0510 19:27:37.151176  459056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0510 19:27:37.163976  459056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0510 19:27:37.176297  459056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0510 19:27:37.176382  459056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0510 19:27:37.189484  459056 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0510 19:27:37.202907  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0510 19:27:37.370636  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0510 19:27:38.101468  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0510 19:27:38.357025  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0510 19:27:38.472109  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
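The five commands above rerun kubeadm's init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the kubeadm.yaml written earlier, as part of the restartPrimaryControlPlane flow that started at 19:27:36. A compact sketch of that sequence, assuming kubeadm is on PATH and with error handling reduced to print-and-stop:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // The same five `kubeadm init phase` steps that appear in the log, in order.
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, phase := range phases {
            args := append([]string{"init", "phase"}, phase...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
                fmt.Printf("phase %v failed: %v\n%s", phase, err, out)
                return
            }
        }
        fmt.Println("control-plane restart phases completed")
    }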
	I0510 19:27:38.566036  459056 api_server.go:52] waiting for apiserver process to appear ...
	I0510 19:27:38.566163  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:39.066944  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:39.566854  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:40.067066  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:40.567198  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:41.066452  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:41.566381  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:42.066951  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:42.567170  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:43.067308  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:43.566541  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:44.067005  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:44.566869  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:45.066432  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:45.567107  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:46.066205  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:46.566600  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:47.066806  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:47.567316  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:48.067123  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:48.566636  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:49.067037  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:49.566942  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:50.066669  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:50.566620  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:51.066533  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:51.567303  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:52.066558  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:52.567193  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:53.066234  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:53.567160  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:54.066832  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:54.567225  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:55.067095  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:55.567141  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:56.066981  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:56.566711  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:57.066205  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:57.566404  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:58.067102  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:58.566428  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:59.066475  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:59.567069  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:00.066988  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:00.566888  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:01.066769  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:01.566741  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:02.066555  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:02.566338  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:03.066492  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:03.567302  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:04.066752  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:04.567029  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:05.066242  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:05.567101  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:06.066378  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:06.566985  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:07.066671  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:07.566514  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:08.067086  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:08.566885  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:09.066763  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:09.566992  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:10.066908  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:10.566843  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:11.066514  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:11.566388  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:12.066218  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:12.566934  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:13.066645  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:13.567085  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:14.066994  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:14.567064  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:15.066411  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:15.567220  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:16.067320  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:16.566859  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:17.066625  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:17.566521  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:18.066671  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:18.566592  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:19.066253  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:19.566860  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:20.066367  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:20.567118  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:21.067193  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:21.566937  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:22.066333  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:22.567056  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:23.066988  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:23.566331  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:24.066265  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:24.566513  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:25.067048  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:25.567212  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:26.067158  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:26.566324  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:27.066325  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:27.566435  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:28.067014  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:28.566560  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:29.066490  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:29.567080  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:30.067132  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:30.566495  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:31.066973  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:31.566321  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:32.067212  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:32.566665  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:33.066716  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:33.566326  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:34.067017  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:34.566429  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:35.067039  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:35.566936  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:36.066553  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:36.566402  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:37.066800  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:37.566267  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:38.066188  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:38.567060  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:28:38.567180  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:28:38.614003  459056 cri.go:89] found id: ""
	I0510 19:28:38.614094  459056 logs.go:282] 0 containers: []
	W0510 19:28:38.614120  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:28:38.614132  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:28:38.614211  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:28:38.651000  459056 cri.go:89] found id: ""
	I0510 19:28:38.651034  459056 logs.go:282] 0 containers: []
	W0510 19:28:38.651046  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:28:38.651053  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:28:38.651121  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:28:38.688211  459056 cri.go:89] found id: ""
	I0510 19:28:38.688238  459056 logs.go:282] 0 containers: []
	W0510 19:28:38.688246  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:28:38.688252  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:28:38.688318  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:28:38.726904  459056 cri.go:89] found id: ""
	I0510 19:28:38.726933  459056 logs.go:282] 0 containers: []
	W0510 19:28:38.726953  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:28:38.726963  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:28:38.727020  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:28:38.764293  459056 cri.go:89] found id: ""
	I0510 19:28:38.764321  459056 logs.go:282] 0 containers: []
	W0510 19:28:38.764330  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:28:38.764335  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:28:38.764390  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:28:38.802044  459056 cri.go:89] found id: ""
	I0510 19:28:38.802075  459056 logs.go:282] 0 containers: []
	W0510 19:28:38.802083  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:28:38.802104  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:28:38.802160  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:28:38.840951  459056 cri.go:89] found id: ""
	I0510 19:28:38.840991  459056 logs.go:282] 0 containers: []
	W0510 19:28:38.841002  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:28:38.841010  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:28:38.841098  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:28:38.879478  459056 cri.go:89] found id: ""
	I0510 19:28:38.879514  459056 logs.go:282] 0 containers: []
	W0510 19:28:38.879522  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:28:38.879533  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:28:38.879548  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:28:38.932148  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:28:38.932193  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:28:38.947813  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:28:38.947845  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:28:39.094230  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:28:39.094264  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:28:39.094283  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:28:39.170356  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:28:39.170406  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:28:41.716545  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:41.734713  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:28:41.734791  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:28:41.772135  459056 cri.go:89] found id: ""
	I0510 19:28:41.772178  459056 logs.go:282] 0 containers: []
	W0510 19:28:41.772187  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:28:41.772193  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:28:41.772246  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:28:41.810841  459056 cri.go:89] found id: ""
	I0510 19:28:41.810875  459056 logs.go:282] 0 containers: []
	W0510 19:28:41.810886  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:28:41.810893  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:28:41.810969  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:28:41.848600  459056 cri.go:89] found id: ""
	I0510 19:28:41.848627  459056 logs.go:282] 0 containers: []
	W0510 19:28:41.848636  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:28:41.848643  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:28:41.848735  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:28:41.887214  459056 cri.go:89] found id: ""
	I0510 19:28:41.887261  459056 logs.go:282] 0 containers: []
	W0510 19:28:41.887273  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:28:41.887282  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:28:41.887353  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:28:41.926422  459056 cri.go:89] found id: ""
	I0510 19:28:41.926455  459056 logs.go:282] 0 containers: []
	W0510 19:28:41.926466  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:28:41.926474  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:28:41.926573  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:28:41.963547  459056 cri.go:89] found id: ""
	I0510 19:28:41.963582  459056 logs.go:282] 0 containers: []
	W0510 19:28:41.963595  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:28:41.963625  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:28:41.963699  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:28:42.007903  459056 cri.go:89] found id: ""
	I0510 19:28:42.007930  459056 logs.go:282] 0 containers: []
	W0510 19:28:42.007938  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:28:42.007943  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:28:42.007996  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:28:42.048020  459056 cri.go:89] found id: ""
	I0510 19:28:42.048054  459056 logs.go:282] 0 containers: []
	W0510 19:28:42.048062  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:28:42.048072  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:28:42.048085  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:28:42.099210  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:28:42.099267  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:28:42.114915  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:28:42.114947  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:28:42.196330  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:28:42.196364  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:28:42.196380  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:28:42.278729  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:28:42.278786  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:28:44.825880  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:44.844164  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:28:44.844258  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:28:44.883963  459056 cri.go:89] found id: ""
	I0510 19:28:44.883992  459056 logs.go:282] 0 containers: []
	W0510 19:28:44.884001  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:28:44.884008  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:28:44.884085  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:28:44.920183  459056 cri.go:89] found id: ""
	I0510 19:28:44.920214  459056 logs.go:282] 0 containers: []
	W0510 19:28:44.920222  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:28:44.920228  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:28:44.920304  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:28:44.956038  459056 cri.go:89] found id: ""
	I0510 19:28:44.956072  459056 logs.go:282] 0 containers: []
	W0510 19:28:44.956087  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:28:44.956093  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:28:44.956165  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:28:44.992412  459056 cri.go:89] found id: ""
	I0510 19:28:44.992448  459056 logs.go:282] 0 containers: []
	W0510 19:28:44.992460  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:28:44.992468  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:28:44.992540  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:28:45.029970  459056 cri.go:89] found id: ""
	I0510 19:28:45.030008  459056 logs.go:282] 0 containers: []
	W0510 19:28:45.030020  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:28:45.030027  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:28:45.030097  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:28:45.065606  459056 cri.go:89] found id: ""
	I0510 19:28:45.065643  459056 logs.go:282] 0 containers: []
	W0510 19:28:45.065654  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:28:45.065662  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:28:45.065736  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:28:45.102978  459056 cri.go:89] found id: ""
	I0510 19:28:45.103009  459056 logs.go:282] 0 containers: []
	W0510 19:28:45.103018  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:28:45.103024  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:28:45.103087  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:28:45.143725  459056 cri.go:89] found id: ""
	I0510 19:28:45.143752  459056 logs.go:282] 0 containers: []
	W0510 19:28:45.143761  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:28:45.143771  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:28:45.143783  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:28:45.187406  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:28:45.187443  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:28:45.237672  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:28:45.237725  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:28:45.253387  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:28:45.253425  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:28:45.326218  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:28:45.326246  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:28:45.326265  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:28:47.904696  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:47.922232  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:28:47.922326  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:28:47.964247  459056 cri.go:89] found id: ""
	I0510 19:28:47.964284  459056 logs.go:282] 0 containers: []
	W0510 19:28:47.964293  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:28:47.964299  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:28:47.964358  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:28:48.001130  459056 cri.go:89] found id: ""
	I0510 19:28:48.001159  459056 logs.go:282] 0 containers: []
	W0510 19:28:48.001167  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:28:48.001175  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:28:48.001245  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:28:48.038486  459056 cri.go:89] found id: ""
	I0510 19:28:48.038519  459056 logs.go:282] 0 containers: []
	W0510 19:28:48.038528  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:28:48.038534  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:28:48.038604  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:28:48.073594  459056 cri.go:89] found id: ""
	I0510 19:28:48.073628  459056 logs.go:282] 0 containers: []
	W0510 19:28:48.073636  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:28:48.073643  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:28:48.073716  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:28:48.113159  459056 cri.go:89] found id: ""
	I0510 19:28:48.113191  459056 logs.go:282] 0 containers: []
	W0510 19:28:48.113199  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:28:48.113205  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:28:48.113271  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:28:48.158534  459056 cri.go:89] found id: ""
	I0510 19:28:48.158570  459056 logs.go:282] 0 containers: []
	W0510 19:28:48.158581  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:28:48.158589  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:28:48.158661  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:28:48.194840  459056 cri.go:89] found id: ""
	I0510 19:28:48.194871  459056 logs.go:282] 0 containers: []
	W0510 19:28:48.194883  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:28:48.194889  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:28:48.194952  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:28:48.233411  459056 cri.go:89] found id: ""
	I0510 19:28:48.233446  459056 logs.go:282] 0 containers: []
	W0510 19:28:48.233455  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:28:48.233465  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:28:48.233481  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:28:48.248955  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:28:48.248988  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:28:48.321462  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:28:48.321486  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:28:48.321499  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:28:48.413091  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:28:48.413139  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:28:48.455370  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:28:48.455417  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:28:51.008549  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:51.026088  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:28:51.026175  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:28:51.065801  459056 cri.go:89] found id: ""
	I0510 19:28:51.065834  459056 logs.go:282] 0 containers: []
	W0510 19:28:51.065844  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:28:51.065850  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:28:51.065915  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:28:51.108971  459056 cri.go:89] found id: ""
	I0510 19:28:51.109002  459056 logs.go:282] 0 containers: []
	W0510 19:28:51.109010  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:28:51.109017  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:28:51.109081  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:28:51.153399  459056 cri.go:89] found id: ""
	I0510 19:28:51.153425  459056 logs.go:282] 0 containers: []
	W0510 19:28:51.153434  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:28:51.153440  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:28:51.153501  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:28:51.193120  459056 cri.go:89] found id: ""
	I0510 19:28:51.193150  459056 logs.go:282] 0 containers: []
	W0510 19:28:51.193159  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:28:51.193165  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:28:51.193219  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:28:51.232126  459056 cri.go:89] found id: ""
	I0510 19:28:51.232160  459056 logs.go:282] 0 containers: []
	W0510 19:28:51.232169  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:28:51.232176  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:28:51.232262  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:28:51.271265  459056 cri.go:89] found id: ""
	I0510 19:28:51.271292  459056 logs.go:282] 0 containers: []
	W0510 19:28:51.271300  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:28:51.271306  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:28:51.271380  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:28:51.314653  459056 cri.go:89] found id: ""
	I0510 19:28:51.314687  459056 logs.go:282] 0 containers: []
	W0510 19:28:51.314698  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:28:51.314710  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:28:51.314788  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:28:51.353697  459056 cri.go:89] found id: ""
	I0510 19:28:51.353726  459056 logs.go:282] 0 containers: []
	W0510 19:28:51.353734  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:28:51.353746  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:28:51.353762  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:28:51.406474  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:28:51.406515  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:28:51.423057  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:28:51.423092  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:28:51.501527  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:28:51.501551  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:28:51.501563  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:28:51.582228  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:28:51.582278  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:28:54.132967  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:54.161653  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:28:54.161729  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:28:54.201063  459056 cri.go:89] found id: ""
	I0510 19:28:54.201098  459056 logs.go:282] 0 containers: []
	W0510 19:28:54.201111  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:28:54.201120  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:28:54.201200  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:28:54.241268  459056 cri.go:89] found id: ""
	I0510 19:28:54.241298  459056 logs.go:282] 0 containers: []
	W0510 19:28:54.241307  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:28:54.241320  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:28:54.241388  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:28:54.279508  459056 cri.go:89] found id: ""
	I0510 19:28:54.279540  459056 logs.go:282] 0 containers: []
	W0510 19:28:54.279549  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:28:54.279555  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:28:54.279621  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:28:54.322256  459056 cri.go:89] found id: ""
	I0510 19:28:54.322295  459056 logs.go:282] 0 containers: []
	W0510 19:28:54.322306  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:28:54.322349  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:28:54.322423  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:28:54.360014  459056 cri.go:89] found id: ""
	I0510 19:28:54.360051  459056 logs.go:282] 0 containers: []
	W0510 19:28:54.360062  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:28:54.360071  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:28:54.360149  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:28:54.399429  459056 cri.go:89] found id: ""
	I0510 19:28:54.399462  459056 logs.go:282] 0 containers: []
	W0510 19:28:54.399473  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:28:54.399479  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:28:54.399544  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:28:54.437094  459056 cri.go:89] found id: ""
	I0510 19:28:54.437120  459056 logs.go:282] 0 containers: []
	W0510 19:28:54.437129  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:28:54.437135  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:28:54.437213  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:28:54.473964  459056 cri.go:89] found id: ""
	I0510 19:28:54.474000  459056 logs.go:282] 0 containers: []
	W0510 19:28:54.474012  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:28:54.474024  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:28:54.474037  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:28:54.526415  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:28:54.526458  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:28:54.542142  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:28:54.542177  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:28:54.618555  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:28:54.618582  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:28:54.618600  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:28:54.695979  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:28:54.696026  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:28:57.241583  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:57.259270  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:28:57.259347  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:28:57.297603  459056 cri.go:89] found id: ""
	I0510 19:28:57.297640  459056 logs.go:282] 0 containers: []
	W0510 19:28:57.297648  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:28:57.297664  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:28:57.297734  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:28:57.339031  459056 cri.go:89] found id: ""
	I0510 19:28:57.339063  459056 logs.go:282] 0 containers: []
	W0510 19:28:57.339072  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:28:57.339090  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:28:57.339167  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:28:57.375753  459056 cri.go:89] found id: ""
	I0510 19:28:57.375783  459056 logs.go:282] 0 containers: []
	W0510 19:28:57.375792  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:28:57.375799  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:28:57.375855  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:28:57.414729  459056 cri.go:89] found id: ""
	I0510 19:28:57.414758  459056 logs.go:282] 0 containers: []
	W0510 19:28:57.414770  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:28:57.414779  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:28:57.414854  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:28:57.453265  459056 cri.go:89] found id: ""
	I0510 19:28:57.453298  459056 logs.go:282] 0 containers: []
	W0510 19:28:57.453309  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:28:57.453318  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:28:57.453379  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:28:57.491548  459056 cri.go:89] found id: ""
	I0510 19:28:57.491579  459056 logs.go:282] 0 containers: []
	W0510 19:28:57.491587  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:28:57.491594  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:28:57.491670  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:28:57.529795  459056 cri.go:89] found id: ""
	I0510 19:28:57.529822  459056 logs.go:282] 0 containers: []
	W0510 19:28:57.529831  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:28:57.529837  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:28:57.529901  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:28:57.570146  459056 cri.go:89] found id: ""
	I0510 19:28:57.570177  459056 logs.go:282] 0 containers: []
	W0510 19:28:57.570186  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:28:57.570196  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:28:57.570211  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:28:57.622879  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:28:57.622928  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:28:57.639210  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:28:57.639256  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:28:57.717348  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:28:57.717382  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:28:57.717399  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:28:57.799663  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:28:57.799716  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:00.351909  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:00.369231  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:00.369300  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:00.419696  459056 cri.go:89] found id: ""
	I0510 19:29:00.419730  459056 logs.go:282] 0 containers: []
	W0510 19:29:00.419740  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:00.419747  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:00.419810  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:00.456741  459056 cri.go:89] found id: ""
	I0510 19:29:00.456847  459056 logs.go:282] 0 containers: []
	W0510 19:29:00.456865  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:00.456874  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:00.456956  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:00.495771  459056 cri.go:89] found id: ""
	I0510 19:29:00.495816  459056 logs.go:282] 0 containers: []
	W0510 19:29:00.495829  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:00.495839  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:00.495919  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:00.541754  459056 cri.go:89] found id: ""
	I0510 19:29:00.541791  459056 logs.go:282] 0 containers: []
	W0510 19:29:00.541803  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:00.541812  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:00.541892  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:00.584200  459056 cri.go:89] found id: ""
	I0510 19:29:00.584230  459056 logs.go:282] 0 containers: []
	W0510 19:29:00.584239  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:00.584245  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:00.584336  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:00.632920  459056 cri.go:89] found id: ""
	I0510 19:29:00.632949  459056 logs.go:282] 0 containers: []
	W0510 19:29:00.632960  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:00.632969  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:00.633033  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:00.684270  459056 cri.go:89] found id: ""
	I0510 19:29:00.684300  459056 logs.go:282] 0 containers: []
	W0510 19:29:00.684309  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:00.684315  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:00.684368  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:00.722259  459056 cri.go:89] found id: ""
	I0510 19:29:00.722292  459056 logs.go:282] 0 containers: []
	W0510 19:29:00.722301  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:00.722311  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:00.722328  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:00.737395  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:00.737431  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:00.816432  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:00.816465  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:00.816485  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:00.900576  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:00.900631  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:00.946239  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:00.946285  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:03.499135  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:03.516795  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:03.516874  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:03.561554  459056 cri.go:89] found id: ""
	I0510 19:29:03.561589  459056 logs.go:282] 0 containers: []
	W0510 19:29:03.561599  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:03.561607  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:03.561674  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:03.604183  459056 cri.go:89] found id: ""
	I0510 19:29:03.604213  459056 logs.go:282] 0 containers: []
	W0510 19:29:03.604221  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:03.604227  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:03.604297  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:03.641319  459056 cri.go:89] found id: ""
	I0510 19:29:03.641350  459056 logs.go:282] 0 containers: []
	W0510 19:29:03.641359  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:03.641366  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:03.641431  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:03.679306  459056 cri.go:89] found id: ""
	I0510 19:29:03.679345  459056 logs.go:282] 0 containers: []
	W0510 19:29:03.679356  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:03.679364  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:03.679444  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:03.720380  459056 cri.go:89] found id: ""
	I0510 19:29:03.720412  459056 logs.go:282] 0 containers: []
	W0510 19:29:03.720420  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:03.720426  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:03.720497  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:03.758115  459056 cri.go:89] found id: ""
	I0510 19:29:03.758183  459056 logs.go:282] 0 containers: []
	W0510 19:29:03.758193  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:03.758206  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:03.758283  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:03.797182  459056 cri.go:89] found id: ""
	I0510 19:29:03.797215  459056 logs.go:282] 0 containers: []
	W0510 19:29:03.797226  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:03.797235  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:03.797294  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:03.837236  459056 cri.go:89] found id: ""
	I0510 19:29:03.837266  459056 logs.go:282] 0 containers: []
	W0510 19:29:03.837274  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:03.837284  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:03.837302  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:03.886362  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:03.886412  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:03.902546  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:03.902581  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:03.980181  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:03.980206  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:03.980219  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:04.060587  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:04.060641  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:06.606310  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:06.633919  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:06.634001  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:06.672938  459056 cri.go:89] found id: ""
	I0510 19:29:06.672969  459056 logs.go:282] 0 containers: []
	W0510 19:29:06.672978  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:06.672986  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:06.673047  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:06.711567  459056 cri.go:89] found id: ""
	I0510 19:29:06.711603  459056 logs.go:282] 0 containers: []
	W0510 19:29:06.711615  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:06.711624  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:06.711710  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:06.752456  459056 cri.go:89] found id: ""
	I0510 19:29:06.752498  459056 logs.go:282] 0 containers: []
	W0510 19:29:06.752510  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:06.752520  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:06.752592  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:06.792722  459056 cri.go:89] found id: ""
	I0510 19:29:06.792755  459056 logs.go:282] 0 containers: []
	W0510 19:29:06.792764  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:06.792771  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:06.792832  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:06.833199  459056 cri.go:89] found id: ""
	I0510 19:29:06.833231  459056 logs.go:282] 0 containers: []
	W0510 19:29:06.833239  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:06.833246  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:06.833300  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:06.871347  459056 cri.go:89] found id: ""
	I0510 19:29:06.871378  459056 logs.go:282] 0 containers: []
	W0510 19:29:06.871386  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:06.871393  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:06.871448  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:06.909447  459056 cri.go:89] found id: ""
	I0510 19:29:06.909478  459056 logs.go:282] 0 containers: []
	W0510 19:29:06.909489  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:06.909497  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:06.909561  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:06.945795  459056 cri.go:89] found id: ""
	I0510 19:29:06.945829  459056 logs.go:282] 0 containers: []
	W0510 19:29:06.945837  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:06.945847  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:06.945861  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:07.028777  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:07.028825  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:07.070640  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:07.070673  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:07.124335  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:07.124383  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:07.140167  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:07.140197  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:07.218319  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:09.718885  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:09.737619  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:09.737701  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:09.775164  459056 cri.go:89] found id: ""
	I0510 19:29:09.775203  459056 logs.go:282] 0 containers: []
	W0510 19:29:09.775211  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:09.775218  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:09.775292  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:09.819357  459056 cri.go:89] found id: ""
	I0510 19:29:09.819395  459056 logs.go:282] 0 containers: []
	W0510 19:29:09.819406  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:09.819415  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:09.819490  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:09.858894  459056 cri.go:89] found id: ""
	I0510 19:29:09.858928  459056 logs.go:282] 0 containers: []
	W0510 19:29:09.858937  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:09.858942  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:09.858996  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:09.895496  459056 cri.go:89] found id: ""
	I0510 19:29:09.895543  459056 logs.go:282] 0 containers: []
	W0510 19:29:09.895554  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:09.895562  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:09.895629  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:09.935443  459056 cri.go:89] found id: ""
	I0510 19:29:09.935476  459056 logs.go:282] 0 containers: []
	W0510 19:29:09.935484  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:09.935490  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:09.935552  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:09.975013  459056 cri.go:89] found id: ""
	I0510 19:29:09.975050  459056 logs.go:282] 0 containers: []
	W0510 19:29:09.975059  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:09.975066  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:09.975122  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:10.017332  459056 cri.go:89] found id: ""
	I0510 19:29:10.017364  459056 logs.go:282] 0 containers: []
	W0510 19:29:10.017372  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:10.017378  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:10.017432  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:10.054109  459056 cri.go:89] found id: ""
	I0510 19:29:10.054145  459056 logs.go:282] 0 containers: []
	W0510 19:29:10.054157  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:10.054169  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:10.054187  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:10.107219  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:10.107275  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:10.122900  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:10.122946  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:10.197374  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:10.197402  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:10.197423  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:10.276176  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:10.276222  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:12.822189  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:12.839516  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:12.839586  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:12.876495  459056 cri.go:89] found id: ""
	I0510 19:29:12.876532  459056 logs.go:282] 0 containers: []
	W0510 19:29:12.876544  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:12.876553  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:12.876628  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:12.914537  459056 cri.go:89] found id: ""
	I0510 19:29:12.914571  459056 logs.go:282] 0 containers: []
	W0510 19:29:12.914581  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:12.914587  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:12.914662  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:12.953369  459056 cri.go:89] found id: ""
	I0510 19:29:12.953403  459056 logs.go:282] 0 containers: []
	W0510 19:29:12.953412  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:12.953418  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:12.953475  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:12.991117  459056 cri.go:89] found id: ""
	I0510 19:29:12.991150  459056 logs.go:282] 0 containers: []
	W0510 19:29:12.991159  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:12.991167  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:12.991226  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:13.035209  459056 cri.go:89] found id: ""
	I0510 19:29:13.035268  459056 logs.go:282] 0 containers: []
	W0510 19:29:13.035281  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:13.035290  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:13.035364  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:13.072156  459056 cri.go:89] found id: ""
	I0510 19:29:13.072191  459056 logs.go:282] 0 containers: []
	W0510 19:29:13.072203  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:13.072211  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:13.072279  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:13.108863  459056 cri.go:89] found id: ""
	I0510 19:29:13.108893  459056 logs.go:282] 0 containers: []
	W0510 19:29:13.108903  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:13.108910  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:13.108967  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:13.155406  459056 cri.go:89] found id: ""
	I0510 19:29:13.155437  459056 logs.go:282] 0 containers: []
	W0510 19:29:13.155445  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:13.155455  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:13.155467  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:13.208638  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:13.208694  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:13.225071  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:13.225107  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:13.300472  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:13.300498  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:13.300515  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:13.380669  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:13.380714  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:15.924108  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:15.941384  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:15.941465  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:15.984230  459056 cri.go:89] found id: ""
	I0510 19:29:15.984259  459056 logs.go:282] 0 containers: []
	W0510 19:29:15.984267  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:15.984273  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:15.984328  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:16.022696  459056 cri.go:89] found id: ""
	I0510 19:29:16.022725  459056 logs.go:282] 0 containers: []
	W0510 19:29:16.022733  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:16.022740  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:16.022818  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:16.064311  459056 cri.go:89] found id: ""
	I0510 19:29:16.064344  459056 logs.go:282] 0 containers: []
	W0510 19:29:16.064356  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:16.064364  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:16.064432  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:16.110646  459056 cri.go:89] found id: ""
	I0510 19:29:16.110680  459056 logs.go:282] 0 containers: []
	W0510 19:29:16.110688  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:16.110695  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:16.110779  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:16.155423  459056 cri.go:89] found id: ""
	I0510 19:29:16.155466  459056 logs.go:282] 0 containers: []
	W0510 19:29:16.155478  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:16.155485  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:16.155560  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:16.199404  459056 cri.go:89] found id: ""
	I0510 19:29:16.199437  459056 logs.go:282] 0 containers: []
	W0510 19:29:16.199445  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:16.199455  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:16.199518  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:16.244501  459056 cri.go:89] found id: ""
	I0510 19:29:16.244532  459056 logs.go:282] 0 containers: []
	W0510 19:29:16.244541  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:16.244547  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:16.244622  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:16.289564  459056 cri.go:89] found id: ""
	I0510 19:29:16.289594  459056 logs.go:282] 0 containers: []
	W0510 19:29:16.289609  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:16.289628  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:16.289645  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:16.339326  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:16.339360  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:16.392002  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:16.392050  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:16.408009  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:16.408039  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:16.480932  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:16.480959  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:16.480972  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:19.062321  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:19.079587  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:19.079667  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:19.122776  459056 cri.go:89] found id: ""
	I0510 19:29:19.122809  459056 logs.go:282] 0 containers: []
	W0510 19:29:19.122817  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:19.122823  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:19.122882  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:19.160116  459056 cri.go:89] found id: ""
	I0510 19:29:19.160154  459056 logs.go:282] 0 containers: []
	W0510 19:29:19.160166  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:19.160175  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:19.160258  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:19.198049  459056 cri.go:89] found id: ""
	I0510 19:29:19.198081  459056 logs.go:282] 0 containers: []
	W0510 19:29:19.198089  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:19.198095  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:19.198151  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:19.236547  459056 cri.go:89] found id: ""
	I0510 19:29:19.236578  459056 logs.go:282] 0 containers: []
	W0510 19:29:19.236587  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:19.236596  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:19.236682  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:19.274688  459056 cri.go:89] found id: ""
	I0510 19:29:19.274727  459056 logs.go:282] 0 containers: []
	W0510 19:29:19.274738  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:19.274746  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:19.274819  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:19.317813  459056 cri.go:89] found id: ""
	I0510 19:29:19.317843  459056 logs.go:282] 0 containers: []
	W0510 19:29:19.317853  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:19.317865  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:19.317934  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:19.360619  459056 cri.go:89] found id: ""
	I0510 19:29:19.360654  459056 logs.go:282] 0 containers: []
	W0510 19:29:19.360663  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:19.360669  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:19.360735  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:19.399001  459056 cri.go:89] found id: ""
	I0510 19:29:19.399030  459056 logs.go:282] 0 containers: []
	W0510 19:29:19.399038  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:19.399048  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:19.399061  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:19.482768  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:19.482819  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:19.525273  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:19.525316  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:19.579149  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:19.579197  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:19.594813  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:19.594853  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:19.667950  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:22.169701  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:22.187665  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:22.187746  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:22.227992  459056 cri.go:89] found id: ""
	I0510 19:29:22.228022  459056 logs.go:282] 0 containers: []
	W0510 19:29:22.228030  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:22.228041  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:22.228164  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:22.267106  459056 cri.go:89] found id: ""
	I0510 19:29:22.267140  459056 logs.go:282] 0 containers: []
	W0510 19:29:22.267149  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:22.267155  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:22.267211  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:22.305600  459056 cri.go:89] found id: ""
	I0510 19:29:22.305628  459056 logs.go:282] 0 containers: []
	W0510 19:29:22.305636  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:22.305643  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:22.305711  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:22.345673  459056 cri.go:89] found id: ""
	I0510 19:29:22.345708  459056 logs.go:282] 0 containers: []
	W0510 19:29:22.345719  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:22.345724  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:22.345778  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:22.384325  459056 cri.go:89] found id: ""
	I0510 19:29:22.384358  459056 logs.go:282] 0 containers: []
	W0510 19:29:22.384371  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:22.384387  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:22.384467  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:22.424747  459056 cri.go:89] found id: ""
	I0510 19:29:22.424779  459056 logs.go:282] 0 containers: []
	W0510 19:29:22.424787  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:22.424794  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:22.424848  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:22.470878  459056 cri.go:89] found id: ""
	I0510 19:29:22.470916  459056 logs.go:282] 0 containers: []
	W0510 19:29:22.470929  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:22.470937  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:22.471010  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:22.515651  459056 cri.go:89] found id: ""
	I0510 19:29:22.515682  459056 logs.go:282] 0 containers: []
	W0510 19:29:22.515693  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:22.515713  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:22.515730  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:22.573654  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:22.573699  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:22.590599  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:22.590637  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:22.670834  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:22.670866  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:22.670882  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:22.754958  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:22.755019  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:25.299898  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:25.317959  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:25.318047  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:25.358952  459056 cri.go:89] found id: ""
	I0510 19:29:25.358990  459056 logs.go:282] 0 containers: []
	W0510 19:29:25.358999  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:25.359005  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:25.359068  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:25.402269  459056 cri.go:89] found id: ""
	I0510 19:29:25.402300  459056 logs.go:282] 0 containers: []
	W0510 19:29:25.402308  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:25.402321  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:25.402402  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:25.441309  459056 cri.go:89] found id: ""
	I0510 19:29:25.441338  459056 logs.go:282] 0 containers: []
	W0510 19:29:25.441348  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:25.441357  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:25.441421  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:25.477026  459056 cri.go:89] found id: ""
	I0510 19:29:25.477073  459056 logs.go:282] 0 containers: []
	W0510 19:29:25.477087  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:25.477095  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:25.477168  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:25.514227  459056 cri.go:89] found id: ""
	I0510 19:29:25.514263  459056 logs.go:282] 0 containers: []
	W0510 19:29:25.514274  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:25.514283  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:25.514357  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:25.552961  459056 cri.go:89] found id: ""
	I0510 19:29:25.552993  459056 logs.go:282] 0 containers: []
	W0510 19:29:25.553002  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:25.553010  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:25.553075  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:25.591284  459056 cri.go:89] found id: ""
	I0510 19:29:25.591315  459056 logs.go:282] 0 containers: []
	W0510 19:29:25.591327  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:25.591336  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:25.591404  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:25.631688  459056 cri.go:89] found id: ""
	I0510 19:29:25.631720  459056 logs.go:282] 0 containers: []
	W0510 19:29:25.631728  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:25.631737  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:25.631750  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:25.686015  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:25.686057  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:25.702233  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:25.702271  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:25.777340  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:25.777373  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:25.777389  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:25.857072  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:25.857118  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:28.400902  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:28.418498  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:28.418570  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:28.454908  459056 cri.go:89] found id: ""
	I0510 19:29:28.454941  459056 logs.go:282] 0 containers: []
	W0510 19:29:28.454950  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:28.454956  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:28.455014  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:28.493646  459056 cri.go:89] found id: ""
	I0510 19:29:28.493682  459056 logs.go:282] 0 containers: []
	W0510 19:29:28.493691  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:28.493700  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:28.493766  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:28.531482  459056 cri.go:89] found id: ""
	I0510 19:29:28.531524  459056 logs.go:282] 0 containers: []
	W0510 19:29:28.531537  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:28.531546  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:28.531618  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:28.568042  459056 cri.go:89] found id: ""
	I0510 19:29:28.568078  459056 logs.go:282] 0 containers: []
	W0510 19:29:28.568087  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:28.568093  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:28.568150  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:28.607141  459056 cri.go:89] found id: ""
	I0510 19:29:28.607172  459056 logs.go:282] 0 containers: []
	W0510 19:29:28.607181  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:28.607187  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:28.607271  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:28.645485  459056 cri.go:89] found id: ""
	I0510 19:29:28.645519  459056 logs.go:282] 0 containers: []
	W0510 19:29:28.645532  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:28.645544  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:28.645618  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:28.685596  459056 cri.go:89] found id: ""
	I0510 19:29:28.685638  459056 logs.go:282] 0 containers: []
	W0510 19:29:28.685649  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:28.685657  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:28.685724  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:28.724977  459056 cri.go:89] found id: ""
	I0510 19:29:28.725005  459056 logs.go:282] 0 containers: []
	W0510 19:29:28.725013  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:28.725023  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:28.725101  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:28.777421  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:28.777476  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:28.793767  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:28.793806  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:28.865581  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:28.865611  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:28.865638  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:28.945845  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:28.945895  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:31.491500  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:31.508822  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:31.508896  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:31.546371  459056 cri.go:89] found id: ""
	I0510 19:29:31.546400  459056 logs.go:282] 0 containers: []
	W0510 19:29:31.546412  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:31.546420  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:31.546478  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:31.588214  459056 cri.go:89] found id: ""
	I0510 19:29:31.588244  459056 logs.go:282] 0 containers: []
	W0510 19:29:31.588252  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:31.588258  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:31.588313  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:31.626683  459056 cri.go:89] found id: ""
	I0510 19:29:31.626718  459056 logs.go:282] 0 containers: []
	W0510 19:29:31.626729  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:31.626737  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:31.626810  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:31.665979  459056 cri.go:89] found id: ""
	I0510 19:29:31.666013  459056 logs.go:282] 0 containers: []
	W0510 19:29:31.666023  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:31.666030  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:31.666087  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:31.702718  459056 cri.go:89] found id: ""
	I0510 19:29:31.702751  459056 logs.go:282] 0 containers: []
	W0510 19:29:31.702767  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:31.702775  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:31.702830  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:31.740496  459056 cri.go:89] found id: ""
	I0510 19:29:31.740530  459056 logs.go:282] 0 containers: []
	W0510 19:29:31.740553  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:31.740561  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:31.740616  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:31.782178  459056 cri.go:89] found id: ""
	I0510 19:29:31.782209  459056 logs.go:282] 0 containers: []
	W0510 19:29:31.782218  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:31.782224  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:31.782278  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:31.817466  459056 cri.go:89] found id: ""
	I0510 19:29:31.817495  459056 logs.go:282] 0 containers: []
	W0510 19:29:31.817503  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:31.817512  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:31.817527  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:31.832641  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:31.832675  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:31.913719  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:31.913745  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:31.913764  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:31.990267  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:31.990316  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:32.033353  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:32.033384  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:34.586504  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:34.606546  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:34.606628  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:34.644492  459056 cri.go:89] found id: ""
	I0510 19:29:34.644526  459056 logs.go:282] 0 containers: []
	W0510 19:29:34.644539  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:34.644547  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:34.644616  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:34.684520  459056 cri.go:89] found id: ""
	I0510 19:29:34.684550  459056 logs.go:282] 0 containers: []
	W0510 19:29:34.684566  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:34.684572  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:34.684627  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:34.722015  459056 cri.go:89] found id: ""
	I0510 19:29:34.722047  459056 logs.go:282] 0 containers: []
	W0510 19:29:34.722055  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:34.722062  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:34.722118  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:34.760175  459056 cri.go:89] found id: ""
	I0510 19:29:34.760203  459056 logs.go:282] 0 containers: []
	W0510 19:29:34.760212  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:34.760219  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:34.760291  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:34.797742  459056 cri.go:89] found id: ""
	I0510 19:29:34.797775  459056 logs.go:282] 0 containers: []
	W0510 19:29:34.797787  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:34.797796  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:34.797870  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:34.834792  459056 cri.go:89] found id: ""
	I0510 19:29:34.834824  459056 logs.go:282] 0 containers: []
	W0510 19:29:34.834832  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:34.834839  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:34.834905  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:34.881683  459056 cri.go:89] found id: ""
	I0510 19:29:34.881720  459056 logs.go:282] 0 containers: []
	W0510 19:29:34.881729  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:34.881738  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:34.881815  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:34.925574  459056 cri.go:89] found id: ""
	I0510 19:29:34.925605  459056 logs.go:282] 0 containers: []
	W0510 19:29:34.925613  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:34.925622  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:34.925636  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:34.977426  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:34.977477  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:34.993190  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:34.993226  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:35.071565  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:35.071590  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:35.071604  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:35.149510  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:35.149563  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:37.697052  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:37.714716  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:37.714828  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:37.752850  459056 cri.go:89] found id: ""
	I0510 19:29:37.752896  459056 logs.go:282] 0 containers: []
	W0510 19:29:37.752909  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:37.752916  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:37.752989  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:37.791810  459056 cri.go:89] found id: ""
	I0510 19:29:37.791847  459056 logs.go:282] 0 containers: []
	W0510 19:29:37.791860  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:37.791868  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:37.791929  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:37.831622  459056 cri.go:89] found id: ""
	I0510 19:29:37.831658  459056 logs.go:282] 0 containers: []
	W0510 19:29:37.831669  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:37.831677  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:37.831755  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:37.873390  459056 cri.go:89] found id: ""
	I0510 19:29:37.873419  459056 logs.go:282] 0 containers: []
	W0510 19:29:37.873427  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:37.873434  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:37.873493  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:37.915385  459056 cri.go:89] found id: ""
	I0510 19:29:37.915421  459056 logs.go:282] 0 containers: []
	W0510 19:29:37.915431  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:37.915439  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:37.915517  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:37.953620  459056 cri.go:89] found id: ""
	I0510 19:29:37.953654  459056 logs.go:282] 0 containers: []
	W0510 19:29:37.953666  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:37.953678  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:37.953772  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:37.991282  459056 cri.go:89] found id: ""
	I0510 19:29:37.991315  459056 logs.go:282] 0 containers: []
	W0510 19:29:37.991328  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:37.991338  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:37.991413  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:38.028482  459056 cri.go:89] found id: ""
	I0510 19:29:38.028520  459056 logs.go:282] 0 containers: []
	W0510 19:29:38.028531  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:38.028545  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:38.028563  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:38.083448  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:38.083506  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:38.099016  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:38.099067  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:38.174538  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:38.174572  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:38.174587  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:38.258394  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:38.258443  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:40.803473  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:40.821814  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:40.821912  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:40.860566  459056 cri.go:89] found id: ""
	I0510 19:29:40.860600  459056 logs.go:282] 0 containers: []
	W0510 19:29:40.860612  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:40.860622  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:40.860683  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:40.897132  459056 cri.go:89] found id: ""
	I0510 19:29:40.897161  459056 logs.go:282] 0 containers: []
	W0510 19:29:40.897169  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:40.897177  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:40.897239  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:40.944583  459056 cri.go:89] found id: ""
	I0510 19:29:40.944622  459056 logs.go:282] 0 containers: []
	W0510 19:29:40.944636  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:40.944645  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:40.944715  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:40.983132  459056 cri.go:89] found id: ""
	I0510 19:29:40.983165  459056 logs.go:282] 0 containers: []
	W0510 19:29:40.983176  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:40.983185  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:40.983283  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:41.020441  459056 cri.go:89] found id: ""
	I0510 19:29:41.020477  459056 logs.go:282] 0 containers: []
	W0510 19:29:41.020486  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:41.020494  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:41.020548  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:41.058522  459056 cri.go:89] found id: ""
	I0510 19:29:41.058562  459056 logs.go:282] 0 containers: []
	W0510 19:29:41.058572  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:41.058579  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:41.058635  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:41.098730  459056 cri.go:89] found id: ""
	I0510 19:29:41.098775  459056 logs.go:282] 0 containers: []
	W0510 19:29:41.098785  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:41.098792  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:41.098854  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:41.139270  459056 cri.go:89] found id: ""
	I0510 19:29:41.139302  459056 logs.go:282] 0 containers: []
	W0510 19:29:41.139310  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:41.139322  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:41.139335  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:41.215383  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:41.215434  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:41.258268  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:41.258314  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:41.313241  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:41.313287  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:41.332109  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:41.332148  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:41.433376  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:43.935156  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:43.953570  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:43.953694  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:43.994014  459056 cri.go:89] found id: ""
	I0510 19:29:43.994049  459056 logs.go:282] 0 containers: []
	W0510 19:29:43.994075  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:43.994083  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:43.994158  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:44.033884  459056 cri.go:89] found id: ""
	I0510 19:29:44.033922  459056 logs.go:282] 0 containers: []
	W0510 19:29:44.033932  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:44.033942  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:44.033999  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:44.075902  459056 cri.go:89] found id: ""
	I0510 19:29:44.075941  459056 logs.go:282] 0 containers: []
	W0510 19:29:44.075950  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:44.075956  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:44.076018  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:44.116711  459056 cri.go:89] found id: ""
	I0510 19:29:44.116745  459056 logs.go:282] 0 containers: []
	W0510 19:29:44.116757  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:44.116779  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:44.116853  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:44.157617  459056 cri.go:89] found id: ""
	I0510 19:29:44.157652  459056 logs.go:282] 0 containers: []
	W0510 19:29:44.157661  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:44.157668  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:44.157727  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:44.197634  459056 cri.go:89] found id: ""
	I0510 19:29:44.197671  459056 logs.go:282] 0 containers: []
	W0510 19:29:44.197679  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:44.197685  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:44.197743  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:44.235756  459056 cri.go:89] found id: ""
	I0510 19:29:44.235797  459056 logs.go:282] 0 containers: []
	W0510 19:29:44.235810  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:44.235818  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:44.235879  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:44.274251  459056 cri.go:89] found id: ""
	I0510 19:29:44.274292  459056 logs.go:282] 0 containers: []
	W0510 19:29:44.274305  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:44.274317  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:44.274337  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:44.318629  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:44.318669  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:44.370941  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:44.370987  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:44.386660  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:44.386697  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:44.463056  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:44.463085  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:44.463103  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:47.046858  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:47.068619  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:47.068705  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:47.119108  459056 cri.go:89] found id: ""
	I0510 19:29:47.119138  459056 logs.go:282] 0 containers: []
	W0510 19:29:47.119148  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:47.119154  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:47.119210  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:47.160941  459056 cri.go:89] found id: ""
	I0510 19:29:47.160974  459056 logs.go:282] 0 containers: []
	W0510 19:29:47.160982  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:47.160988  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:47.161050  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:47.210420  459056 cri.go:89] found id: ""
	I0510 19:29:47.210452  459056 logs.go:282] 0 containers: []
	W0510 19:29:47.210460  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:47.210466  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:47.210520  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:47.250554  459056 cri.go:89] found id: ""
	I0510 19:29:47.250591  459056 logs.go:282] 0 containers: []
	W0510 19:29:47.250600  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:47.250612  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:47.250674  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:47.290621  459056 cri.go:89] found id: ""
	I0510 19:29:47.290656  459056 logs.go:282] 0 containers: []
	W0510 19:29:47.290667  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:47.290676  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:47.290749  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:47.331044  459056 cri.go:89] found id: ""
	I0510 19:29:47.331079  459056 logs.go:282] 0 containers: []
	W0510 19:29:47.331091  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:47.331100  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:47.331162  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:47.369926  459056 cri.go:89] found id: ""
	I0510 19:29:47.369958  459056 logs.go:282] 0 containers: []
	W0510 19:29:47.369967  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:47.369973  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:47.370047  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:47.410658  459056 cri.go:89] found id: ""
	I0510 19:29:47.410699  459056 logs.go:282] 0 containers: []
	W0510 19:29:47.410708  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:47.410723  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:47.410737  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:47.489045  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:47.489100  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:47.536078  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:47.536117  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:47.588663  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:47.588727  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:47.606182  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:47.606220  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:47.680331  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:50.180849  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:50.198636  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:50.198740  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:50.238270  459056 cri.go:89] found id: ""
	I0510 19:29:50.238301  459056 logs.go:282] 0 containers: []
	W0510 19:29:50.238314  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:50.238323  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:50.238399  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:50.276207  459056 cri.go:89] found id: ""
	I0510 19:29:50.276244  459056 logs.go:282] 0 containers: []
	W0510 19:29:50.276256  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:50.276264  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:50.276333  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:50.311826  459056 cri.go:89] found id: ""
	I0510 19:29:50.311864  459056 logs.go:282] 0 containers: []
	W0510 19:29:50.311875  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:50.311884  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:50.311961  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:50.347100  459056 cri.go:89] found id: ""
	I0510 19:29:50.347133  459056 logs.go:282] 0 containers: []
	W0510 19:29:50.347142  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:50.347151  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:50.347229  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:50.382788  459056 cri.go:89] found id: ""
	I0510 19:29:50.382816  459056 logs.go:282] 0 containers: []
	W0510 19:29:50.382824  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:50.382830  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:50.382898  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:50.420656  459056 cri.go:89] found id: ""
	I0510 19:29:50.420700  459056 logs.go:282] 0 containers: []
	W0510 19:29:50.420709  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:50.420722  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:50.420782  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:50.460911  459056 cri.go:89] found id: ""
	I0510 19:29:50.460948  459056 logs.go:282] 0 containers: []
	W0510 19:29:50.460956  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:50.460962  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:50.461016  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:50.498074  459056 cri.go:89] found id: ""
	I0510 19:29:50.498109  459056 logs.go:282] 0 containers: []
	W0510 19:29:50.498122  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:50.498135  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:50.498152  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:50.576436  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:50.576486  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:50.620554  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:50.620594  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:50.672242  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:50.672292  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:50.688401  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:50.688435  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:50.765125  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:53.266941  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:53.285235  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:53.285306  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:53.327821  459056 cri.go:89] found id: ""
	I0510 19:29:53.327872  459056 logs.go:282] 0 containers: []
	W0510 19:29:53.327880  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:53.327888  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:53.327971  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:53.367170  459056 cri.go:89] found id: ""
	I0510 19:29:53.367212  459056 logs.go:282] 0 containers: []
	W0510 19:29:53.367224  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:53.367254  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:53.367338  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:53.411071  459056 cri.go:89] found id: ""
	I0510 19:29:53.411104  459056 logs.go:282] 0 containers: []
	W0510 19:29:53.411112  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:53.411119  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:53.411194  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:53.451093  459056 cri.go:89] found id: ""
	I0510 19:29:53.451160  459056 logs.go:282] 0 containers: []
	W0510 19:29:53.451175  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:53.451184  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:53.451278  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:53.490305  459056 cri.go:89] found id: ""
	I0510 19:29:53.490337  459056 logs.go:282] 0 containers: []
	W0510 19:29:53.490345  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:53.490351  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:53.490421  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:53.529657  459056 cri.go:89] found id: ""
	I0510 19:29:53.529703  459056 logs.go:282] 0 containers: []
	W0510 19:29:53.529716  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:53.529728  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:53.529801  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:53.570169  459056 cri.go:89] found id: ""
	I0510 19:29:53.570211  459056 logs.go:282] 0 containers: []
	W0510 19:29:53.570223  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:53.570232  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:53.570300  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:53.613547  459056 cri.go:89] found id: ""
	I0510 19:29:53.613576  459056 logs.go:282] 0 containers: []
	W0510 19:29:53.613584  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:53.613593  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:53.613607  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:53.665574  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:53.665633  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:53.682279  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:53.682319  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:53.760795  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:53.760824  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:53.760843  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:53.844386  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:53.844433  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:56.398332  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:56.416456  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:56.416552  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:56.454203  459056 cri.go:89] found id: ""
	I0510 19:29:56.454240  459056 logs.go:282] 0 containers: []
	W0510 19:29:56.454254  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:56.454265  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:56.454350  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:56.492744  459056 cri.go:89] found id: ""
	I0510 19:29:56.492779  459056 logs.go:282] 0 containers: []
	W0510 19:29:56.492791  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:56.492799  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:56.492893  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:56.529891  459056 cri.go:89] found id: ""
	I0510 19:29:56.529924  459056 logs.go:282] 0 containers: []
	W0510 19:29:56.529933  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:56.529943  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:56.530000  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:56.566697  459056 cri.go:89] found id: ""
	I0510 19:29:56.566732  459056 logs.go:282] 0 containers: []
	W0510 19:29:56.566743  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:56.566752  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:56.566816  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:56.608258  459056 cri.go:89] found id: ""
	I0510 19:29:56.608295  459056 logs.go:282] 0 containers: []
	W0510 19:29:56.608307  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:56.608315  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:56.608384  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:56.648700  459056 cri.go:89] found id: ""
	I0510 19:29:56.648734  459056 logs.go:282] 0 containers: []
	W0510 19:29:56.648746  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:56.648755  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:56.648823  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:56.686623  459056 cri.go:89] found id: ""
	I0510 19:29:56.686661  459056 logs.go:282] 0 containers: []
	W0510 19:29:56.686672  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:56.686680  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:56.686750  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:56.726136  459056 cri.go:89] found id: ""
	I0510 19:29:56.726165  459056 logs.go:282] 0 containers: []
	W0510 19:29:56.726180  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:56.726193  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:56.726209  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:56.777146  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:56.777195  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:56.793496  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:56.793530  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:56.866401  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:56.866436  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:56.866452  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:56.944116  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:56.944168  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:59.488989  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:59.506161  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:59.506233  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:59.542854  459056 cri.go:89] found id: ""
	I0510 19:29:59.542891  459056 logs.go:282] 0 containers: []
	W0510 19:29:59.542900  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:59.542907  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:59.542961  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:59.580216  459056 cri.go:89] found id: ""
	I0510 19:29:59.580257  459056 logs.go:282] 0 containers: []
	W0510 19:29:59.580268  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:59.580276  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:59.580348  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:59.623729  459056 cri.go:89] found id: ""
	I0510 19:29:59.623770  459056 logs.go:282] 0 containers: []
	W0510 19:29:59.623781  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:59.623790  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:59.623854  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:59.662414  459056 cri.go:89] found id: ""
	I0510 19:29:59.662447  459056 logs.go:282] 0 containers: []
	W0510 19:29:59.662455  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:59.662462  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:59.662531  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:59.700471  459056 cri.go:89] found id: ""
	I0510 19:29:59.700505  459056 logs.go:282] 0 containers: []
	W0510 19:29:59.700514  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:59.700520  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:59.700593  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:59.740841  459056 cri.go:89] found id: ""
	I0510 19:29:59.740876  459056 logs.go:282] 0 containers: []
	W0510 19:29:59.740884  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:59.740891  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:59.740944  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:59.782895  459056 cri.go:89] found id: ""
	I0510 19:29:59.782937  459056 logs.go:282] 0 containers: []
	W0510 19:29:59.782946  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:59.782952  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:59.783021  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:59.820556  459056 cri.go:89] found id: ""
	I0510 19:29:59.820591  459056 logs.go:282] 0 containers: []
	W0510 19:29:59.820603  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:59.820615  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:59.820632  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:59.835555  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:59.835591  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:59.907710  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:59.907742  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:59.907758  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:59.983847  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:59.983895  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:00.030738  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:00.030782  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:02.583146  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:02.601217  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:02.601290  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:02.638485  459056 cri.go:89] found id: ""
	I0510 19:30:02.638523  459056 logs.go:282] 0 containers: []
	W0510 19:30:02.638536  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:02.638544  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:02.638625  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:02.676096  459056 cri.go:89] found id: ""
	I0510 19:30:02.676124  459056 logs.go:282] 0 containers: []
	W0510 19:30:02.676132  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:02.676138  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:02.676198  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:02.712753  459056 cri.go:89] found id: ""
	I0510 19:30:02.712794  459056 logs.go:282] 0 containers: []
	W0510 19:30:02.712806  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:02.712814  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:02.712889  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:02.750540  459056 cri.go:89] found id: ""
	I0510 19:30:02.750572  459056 logs.go:282] 0 containers: []
	W0510 19:30:02.750580  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:02.750588  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:02.750666  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:02.789337  459056 cri.go:89] found id: ""
	I0510 19:30:02.789372  459056 logs.go:282] 0 containers: []
	W0510 19:30:02.789386  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:02.789394  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:02.789471  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:02.827044  459056 cri.go:89] found id: ""
	I0510 19:30:02.827076  459056 logs.go:282] 0 containers: []
	W0510 19:30:02.827087  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:02.827094  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:02.827154  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:02.867202  459056 cri.go:89] found id: ""
	I0510 19:30:02.867251  459056 logs.go:282] 0 containers: []
	W0510 19:30:02.867264  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:02.867272  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:02.867336  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:02.906104  459056 cri.go:89] found id: ""
	I0510 19:30:02.906136  459056 logs.go:282] 0 containers: []
	W0510 19:30:02.906145  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:02.906155  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:02.906167  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:02.959451  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:02.959504  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:02.975037  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:02.975074  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:03.051037  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:03.051066  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:03.051083  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:03.132615  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:03.132663  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:05.677564  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:05.695683  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:05.695774  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:05.733222  459056 cri.go:89] found id: ""
	I0510 19:30:05.733253  459056 logs.go:282] 0 containers: []
	W0510 19:30:05.733266  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:05.733273  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:05.733343  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:05.775893  459056 cri.go:89] found id: ""
	I0510 19:30:05.775926  459056 logs.go:282] 0 containers: []
	W0510 19:30:05.775938  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:05.775946  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:05.776013  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:05.814170  459056 cri.go:89] found id: ""
	I0510 19:30:05.814201  459056 logs.go:282] 0 containers: []
	W0510 19:30:05.814209  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:05.814215  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:05.814271  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:05.865156  459056 cri.go:89] found id: ""
	I0510 19:30:05.865185  459056 logs.go:282] 0 containers: []
	W0510 19:30:05.865193  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:05.865200  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:05.865267  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:05.904409  459056 cri.go:89] found id: ""
	I0510 19:30:05.904440  459056 logs.go:282] 0 containers: []
	W0510 19:30:05.904449  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:05.904455  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:05.904516  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:05.948278  459056 cri.go:89] found id: ""
	I0510 19:30:05.948308  459056 logs.go:282] 0 containers: []
	W0510 19:30:05.948316  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:05.948322  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:05.948383  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:05.986379  459056 cri.go:89] found id: ""
	I0510 19:30:05.986415  459056 logs.go:282] 0 containers: []
	W0510 19:30:05.986426  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:05.986435  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:05.986502  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:06.030940  459056 cri.go:89] found id: ""
	I0510 19:30:06.030974  459056 logs.go:282] 0 containers: []
	W0510 19:30:06.030984  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:06.030994  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:06.031007  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:06.081923  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:06.081973  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:06.097288  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:06.097321  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:06.169428  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:06.169457  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:06.169471  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:06.247404  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:06.247457  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:08.791138  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:08.810447  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:08.810527  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:08.849947  459056 cri.go:89] found id: ""
	I0510 19:30:08.849983  459056 logs.go:282] 0 containers: []
	W0510 19:30:08.849996  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:08.850005  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:08.850079  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:08.889474  459056 cri.go:89] found id: ""
	I0510 19:30:08.889511  459056 logs.go:282] 0 containers: []
	W0510 19:30:08.889521  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:08.889530  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:08.889605  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:08.929364  459056 cri.go:89] found id: ""
	I0510 19:30:08.929402  459056 logs.go:282] 0 containers: []
	W0510 19:30:08.929414  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:08.929420  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:08.929481  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:08.970260  459056 cri.go:89] found id: ""
	I0510 19:30:08.970292  459056 logs.go:282] 0 containers: []
	W0510 19:30:08.970301  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:08.970312  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:08.970370  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:09.011080  459056 cri.go:89] found id: ""
	I0510 19:30:09.011114  459056 logs.go:282] 0 containers: []
	W0510 19:30:09.011123  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:09.011130  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:09.011192  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:09.050057  459056 cri.go:89] found id: ""
	I0510 19:30:09.050096  459056 logs.go:282] 0 containers: []
	W0510 19:30:09.050106  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:09.050112  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:09.050177  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:09.089408  459056 cri.go:89] found id: ""
	I0510 19:30:09.089454  459056 logs.go:282] 0 containers: []
	W0510 19:30:09.089467  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:09.089484  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:09.089559  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:09.127502  459056 cri.go:89] found id: ""
	I0510 19:30:09.127533  459056 logs.go:282] 0 containers: []
	W0510 19:30:09.127544  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:09.127555  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:09.127573  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:09.177856  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:09.177903  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:09.194009  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:09.194041  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:09.269803  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:09.269833  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:09.269851  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:09.350498  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:09.350562  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:11.895252  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:11.913748  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:11.913819  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:11.957943  459056 cri.go:89] found id: ""
	I0510 19:30:11.957974  459056 logs.go:282] 0 containers: []
	W0510 19:30:11.957982  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:11.957990  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:11.958059  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:11.999707  459056 cri.go:89] found id: ""
	I0510 19:30:11.999735  459056 logs.go:282] 0 containers: []
	W0510 19:30:11.999743  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:11.999750  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:11.999805  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:12.044866  459056 cri.go:89] found id: ""
	I0510 19:30:12.044905  459056 logs.go:282] 0 containers: []
	W0510 19:30:12.044914  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:12.044922  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:12.044980  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:12.083885  459056 cri.go:89] found id: ""
	I0510 19:30:12.083925  459056 logs.go:282] 0 containers: []
	W0510 19:30:12.083938  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:12.083946  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:12.084014  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:12.124186  459056 cri.go:89] found id: ""
	I0510 19:30:12.124223  459056 logs.go:282] 0 containers: []
	W0510 19:30:12.124232  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:12.124239  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:12.124296  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:12.163773  459056 cri.go:89] found id: ""
	I0510 19:30:12.163809  459056 logs.go:282] 0 containers: []
	W0510 19:30:12.163817  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:12.163824  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:12.163887  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:12.208245  459056 cri.go:89] found id: ""
	I0510 19:30:12.208285  459056 logs.go:282] 0 containers: []
	W0510 19:30:12.208297  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:12.208305  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:12.208378  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:12.248816  459056 cri.go:89] found id: ""
	I0510 19:30:12.248855  459056 logs.go:282] 0 containers: []
	W0510 19:30:12.248871  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:12.248885  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:12.248907  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:12.293098  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:12.293137  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:12.346119  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:12.346166  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:12.362174  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:12.362208  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:12.436485  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:12.436514  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:12.436527  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:15.021483  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:15.039908  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:15.039983  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:15.077291  459056 cri.go:89] found id: ""
	I0510 19:30:15.077323  459056 logs.go:282] 0 containers: []
	W0510 19:30:15.077335  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:15.077344  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:15.077417  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:15.119066  459056 cri.go:89] found id: ""
	I0510 19:30:15.119099  459056 logs.go:282] 0 containers: []
	W0510 19:30:15.119108  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:15.119114  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:15.119169  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:15.158927  459056 cri.go:89] found id: ""
	I0510 19:30:15.158957  459056 logs.go:282] 0 containers: []
	W0510 19:30:15.158968  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:15.158976  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:15.159052  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:15.199423  459056 cri.go:89] found id: ""
	I0510 19:30:15.199458  459056 logs.go:282] 0 containers: []
	W0510 19:30:15.199467  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:15.199474  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:15.199538  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:15.237695  459056 cri.go:89] found id: ""
	I0510 19:30:15.237734  459056 logs.go:282] 0 containers: []
	W0510 19:30:15.237744  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:15.237751  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:15.237822  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:15.280652  459056 cri.go:89] found id: ""
	I0510 19:30:15.280693  459056 logs.go:282] 0 containers: []
	W0510 19:30:15.280705  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:15.280721  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:15.280794  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:15.319730  459056 cri.go:89] found id: ""
	I0510 19:30:15.319767  459056 logs.go:282] 0 containers: []
	W0510 19:30:15.319780  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:15.319788  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:15.319861  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:15.361113  459056 cri.go:89] found id: ""
	I0510 19:30:15.361147  459056 logs.go:282] 0 containers: []
	W0510 19:30:15.361156  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:15.361165  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:15.361178  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:15.424953  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:15.425003  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:15.444155  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:15.444187  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:15.520040  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:15.520067  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:15.520080  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:15.595963  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:15.596013  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:18.142672  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:18.160293  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:18.160373  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:18.197867  459056 cri.go:89] found id: ""
	I0510 19:30:18.197911  459056 logs.go:282] 0 containers: []
	W0510 19:30:18.197920  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:18.197927  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:18.197985  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:18.236491  459056 cri.go:89] found id: ""
	I0510 19:30:18.236519  459056 logs.go:282] 0 containers: []
	W0510 19:30:18.236528  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:18.236535  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:18.236591  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:18.275316  459056 cri.go:89] found id: ""
	I0510 19:30:18.275355  459056 logs.go:282] 0 containers: []
	W0510 19:30:18.275368  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:18.275376  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:18.275447  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:18.314904  459056 cri.go:89] found id: ""
	I0510 19:30:18.314946  459056 logs.go:282] 0 containers: []
	W0510 19:30:18.314963  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:18.314972  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:18.315049  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:18.353877  459056 cri.go:89] found id: ""
	I0510 19:30:18.353906  459056 logs.go:282] 0 containers: []
	W0510 19:30:18.353924  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:18.353933  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:18.354019  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:18.391081  459056 cri.go:89] found id: ""
	I0510 19:30:18.391115  459056 logs.go:282] 0 containers: []
	W0510 19:30:18.391124  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:18.391131  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:18.391208  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:18.430112  459056 cri.go:89] found id: ""
	I0510 19:30:18.430151  459056 logs.go:282] 0 containers: []
	W0510 19:30:18.430165  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:18.430171  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:18.430241  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:18.467247  459056 cri.go:89] found id: ""
	I0510 19:30:18.467282  459056 logs.go:282] 0 containers: []
	W0510 19:30:18.467294  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:18.467307  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:18.467331  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:18.483013  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:18.483049  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:18.556404  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:18.556437  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:18.556457  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:18.634193  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:18.634242  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:18.677713  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:18.677752  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:21.230499  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:21.248397  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:21.248485  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:21.284922  459056 cri.go:89] found id: ""
	I0510 19:30:21.284961  459056 logs.go:282] 0 containers: []
	W0510 19:30:21.284974  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:21.284983  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:21.285062  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:21.323019  459056 cri.go:89] found id: ""
	I0510 19:30:21.323054  459056 logs.go:282] 0 containers: []
	W0510 19:30:21.323064  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:21.323071  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:21.323148  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:21.361809  459056 cri.go:89] found id: ""
	I0510 19:30:21.361838  459056 logs.go:282] 0 containers: []
	W0510 19:30:21.361846  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:21.361852  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:21.361930  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:21.399062  459056 cri.go:89] found id: ""
	I0510 19:30:21.399101  459056 logs.go:282] 0 containers: []
	W0510 19:30:21.399115  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:21.399124  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:21.399195  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:21.436027  459056 cri.go:89] found id: ""
	I0510 19:30:21.436061  459056 logs.go:282] 0 containers: []
	W0510 19:30:21.436071  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:21.436077  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:21.436143  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:21.481101  459056 cri.go:89] found id: ""
	I0510 19:30:21.481141  459056 logs.go:282] 0 containers: []
	W0510 19:30:21.481151  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:21.481158  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:21.481213  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:21.525918  459056 cri.go:89] found id: ""
	I0510 19:30:21.525949  459056 logs.go:282] 0 containers: []
	W0510 19:30:21.525958  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:21.525965  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:21.526051  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:21.566402  459056 cri.go:89] found id: ""
	I0510 19:30:21.566438  459056 logs.go:282] 0 containers: []
	W0510 19:30:21.566451  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:21.566466  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:21.566483  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:21.640295  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:21.640326  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:21.640344  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:21.723808  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:21.723860  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:21.787009  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:21.787053  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:21.846605  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:21.846653  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:24.365273  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:24.382257  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:24.382346  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:24.422109  459056 cri.go:89] found id: ""
	I0510 19:30:24.422145  459056 logs.go:282] 0 containers: []
	W0510 19:30:24.422154  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:24.422161  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:24.422223  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:24.461355  459056 cri.go:89] found id: ""
	I0510 19:30:24.461382  459056 logs.go:282] 0 containers: []
	W0510 19:30:24.461389  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:24.461395  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:24.461451  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:24.500168  459056 cri.go:89] found id: ""
	I0510 19:30:24.500203  459056 logs.go:282] 0 containers: []
	W0510 19:30:24.500214  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:24.500222  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:24.500293  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:24.535437  459056 cri.go:89] found id: ""
	I0510 19:30:24.535473  459056 logs.go:282] 0 containers: []
	W0510 19:30:24.535481  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:24.535487  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:24.535567  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:24.574226  459056 cri.go:89] found id: ""
	I0510 19:30:24.574262  459056 logs.go:282] 0 containers: []
	W0510 19:30:24.574274  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:24.574282  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:24.574353  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:24.611038  459056 cri.go:89] found id: ""
	I0510 19:30:24.611076  459056 logs.go:282] 0 containers: []
	W0510 19:30:24.611085  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:24.611094  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:24.611148  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:24.650255  459056 cri.go:89] found id: ""
	I0510 19:30:24.650291  459056 logs.go:282] 0 containers: []
	W0510 19:30:24.650303  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:24.650313  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:24.650382  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:24.688115  459056 cri.go:89] found id: ""
	I0510 19:30:24.688148  459056 logs.go:282] 0 containers: []
	W0510 19:30:24.688157  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:24.688166  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:24.688180  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:24.738142  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:24.738193  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:24.754027  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:24.754059  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:24.836221  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:24.836251  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:24.836270  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:24.911260  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:24.911306  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:27.453339  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:27.470837  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:27.470922  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:27.510141  459056 cri.go:89] found id: ""
	I0510 19:30:27.510171  459056 logs.go:282] 0 containers: []
	W0510 19:30:27.510180  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:27.510187  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:27.510245  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:27.560311  459056 cri.go:89] found id: ""
	I0510 19:30:27.560337  459056 logs.go:282] 0 containers: []
	W0510 19:30:27.560346  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:27.560352  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:27.560412  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:27.615618  459056 cri.go:89] found id: ""
	I0510 19:30:27.615648  459056 logs.go:282] 0 containers: []
	W0510 19:30:27.615658  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:27.615683  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:27.615745  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:27.663257  459056 cri.go:89] found id: ""
	I0510 19:30:27.663290  459056 logs.go:282] 0 containers: []
	W0510 19:30:27.663298  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:27.663305  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:27.663377  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:27.705815  459056 cri.go:89] found id: ""
	I0510 19:30:27.705856  459056 logs.go:282] 0 containers: []
	W0510 19:30:27.705864  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:27.705870  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:27.705932  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:27.744580  459056 cri.go:89] found id: ""
	I0510 19:30:27.744612  459056 logs.go:282] 0 containers: []
	W0510 19:30:27.744620  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:27.744637  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:27.744694  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:27.781041  459056 cri.go:89] found id: ""
	I0510 19:30:27.781070  459056 logs.go:282] 0 containers: []
	W0510 19:30:27.781078  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:27.781087  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:27.781145  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:27.818543  459056 cri.go:89] found id: ""
	I0510 19:30:27.818583  459056 logs.go:282] 0 containers: []
	W0510 19:30:27.818592  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:27.818603  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:27.818631  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:27.834004  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:27.834038  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:27.907944  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:27.907973  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:27.907991  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:27.988229  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:27.988276  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:28.032107  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:28.032141  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:30.581752  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:30.599095  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:30.599167  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:30.637772  459056 cri.go:89] found id: ""
	I0510 19:30:30.637804  459056 logs.go:282] 0 containers: []
	W0510 19:30:30.637815  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:30.637824  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:30.637894  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:30.674650  459056 cri.go:89] found id: ""
	I0510 19:30:30.674690  459056 logs.go:282] 0 containers: []
	W0510 19:30:30.674702  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:30.674709  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:30.674791  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:30.712335  459056 cri.go:89] found id: ""
	I0510 19:30:30.712370  459056 logs.go:282] 0 containers: []
	W0510 19:30:30.712379  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:30.712384  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:30.712457  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:30.749850  459056 cri.go:89] found id: ""
	I0510 19:30:30.749894  459056 logs.go:282] 0 containers: []
	W0510 19:30:30.749906  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:30.749914  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:30.750001  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:30.790937  459056 cri.go:89] found id: ""
	I0510 19:30:30.790976  459056 logs.go:282] 0 containers: []
	W0510 19:30:30.790985  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:30.790992  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:30.791048  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:30.830223  459056 cri.go:89] found id: ""
	I0510 19:30:30.830256  459056 logs.go:282] 0 containers: []
	W0510 19:30:30.830265  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:30.830271  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:30.830335  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:30.868658  459056 cri.go:89] found id: ""
	I0510 19:30:30.868685  459056 logs.go:282] 0 containers: []
	W0510 19:30:30.868693  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:30.868699  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:30.868755  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:30.908485  459056 cri.go:89] found id: ""
	I0510 19:30:30.908518  459056 logs.go:282] 0 containers: []
	W0510 19:30:30.908527  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:30.908537  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:30.908576  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:30.987890  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:30.987915  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:30.987930  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:31.066668  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:31.066724  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:31.114289  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:31.114322  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:31.168049  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:31.168101  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:33.685815  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:33.702996  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:33.703075  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:33.740679  459056 cri.go:89] found id: ""
	I0510 19:30:33.740710  459056 logs.go:282] 0 containers: []
	W0510 19:30:33.740718  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:33.740724  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:33.740789  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:33.778013  459056 cri.go:89] found id: ""
	I0510 19:30:33.778045  459056 logs.go:282] 0 containers: []
	W0510 19:30:33.778053  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:33.778059  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:33.778118  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:33.819601  459056 cri.go:89] found id: ""
	I0510 19:30:33.819634  459056 logs.go:282] 0 containers: []
	W0510 19:30:33.819643  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:33.819649  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:33.819719  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:33.858368  459056 cri.go:89] found id: ""
	I0510 19:30:33.858399  459056 logs.go:282] 0 containers: []
	W0510 19:30:33.858407  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:33.858414  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:33.858469  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:33.899175  459056 cri.go:89] found id: ""
	I0510 19:30:33.899210  459056 logs.go:282] 0 containers: []
	W0510 19:30:33.899219  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:33.899225  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:33.899297  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:33.938037  459056 cri.go:89] found id: ""
	I0510 19:30:33.938075  459056 logs.go:282] 0 containers: []
	W0510 19:30:33.938085  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:33.938092  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:33.938151  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:33.976364  459056 cri.go:89] found id: ""
	I0510 19:30:33.976398  459056 logs.go:282] 0 containers: []
	W0510 19:30:33.976408  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:33.976415  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:33.976474  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:34.019444  459056 cri.go:89] found id: ""
	I0510 19:30:34.019476  459056 logs.go:282] 0 containers: []
	W0510 19:30:34.019485  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:34.019496  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:34.019509  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:34.066863  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:34.066897  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:34.116346  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:34.116394  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:34.131809  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:34.131842  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:34.201228  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:34.201261  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:34.201278  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:36.784883  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:36.802185  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:36.802277  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:36.838342  459056 cri.go:89] found id: ""
	I0510 19:30:36.838382  459056 logs.go:282] 0 containers: []
	W0510 19:30:36.838395  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:36.838405  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:36.838484  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:36.875021  459056 cri.go:89] found id: ""
	I0510 19:30:36.875052  459056 logs.go:282] 0 containers: []
	W0510 19:30:36.875060  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:36.875066  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:36.875136  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:36.912550  459056 cri.go:89] found id: ""
	I0510 19:30:36.912579  459056 logs.go:282] 0 containers: []
	W0510 19:30:36.912589  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:36.912595  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:36.912672  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:36.953970  459056 cri.go:89] found id: ""
	I0510 19:30:36.954002  459056 logs.go:282] 0 containers: []
	W0510 19:30:36.954013  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:36.954021  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:36.954090  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:36.990198  459056 cri.go:89] found id: ""
	I0510 19:30:36.990227  459056 logs.go:282] 0 containers: []
	W0510 19:30:36.990236  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:36.990242  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:36.990315  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:37.026559  459056 cri.go:89] found id: ""
	I0510 19:30:37.026594  459056 logs.go:282] 0 containers: []
	W0510 19:30:37.026604  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:37.026612  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:37.026696  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:37.063080  459056 cri.go:89] found id: ""
	I0510 19:30:37.063112  459056 logs.go:282] 0 containers: []
	W0510 19:30:37.063120  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:37.063127  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:37.063181  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:37.099746  459056 cri.go:89] found id: ""
	I0510 19:30:37.099786  459056 logs.go:282] 0 containers: []
	W0510 19:30:37.099800  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:37.099814  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:37.099831  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:37.150884  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:37.150932  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:37.166536  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:37.166568  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:37.241013  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:37.241045  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:37.241062  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:37.319328  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:37.319370  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:39.863629  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:39.881255  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:39.881331  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:39.921099  459056 cri.go:89] found id: ""
	I0510 19:30:39.921128  459056 logs.go:282] 0 containers: []
	W0510 19:30:39.921136  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:39.921142  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:39.921208  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:39.958588  459056 cri.go:89] found id: ""
	I0510 19:30:39.958620  459056 logs.go:282] 0 containers: []
	W0510 19:30:39.958629  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:39.958634  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:39.958701  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:39.995129  459056 cri.go:89] found id: ""
	I0510 19:30:39.995160  459056 logs.go:282] 0 containers: []
	W0510 19:30:39.995168  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:39.995174  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:39.995230  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:40.031278  459056 cri.go:89] found id: ""
	I0510 19:30:40.031308  459056 logs.go:282] 0 containers: []
	W0510 19:30:40.031320  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:40.031328  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:40.031399  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:40.069662  459056 cri.go:89] found id: ""
	I0510 19:30:40.069694  459056 logs.go:282] 0 containers: []
	W0510 19:30:40.069703  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:40.069708  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:40.069769  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:40.106418  459056 cri.go:89] found id: ""
	I0510 19:30:40.106452  459056 logs.go:282] 0 containers: []
	W0510 19:30:40.106464  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:40.106474  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:40.106546  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:40.143694  459056 cri.go:89] found id: ""
	I0510 19:30:40.143728  459056 logs.go:282] 0 containers: []
	W0510 19:30:40.143737  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:40.143743  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:40.143812  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:40.178265  459056 cri.go:89] found id: ""
	I0510 19:30:40.178296  459056 logs.go:282] 0 containers: []
	W0510 19:30:40.178304  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:40.178314  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:40.178328  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:40.247907  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:40.247940  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:40.247959  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:40.321933  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:40.321985  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:40.368947  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:40.368991  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:40.419749  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:40.419791  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:42.936834  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:42.954258  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:42.954332  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:42.991570  459056 cri.go:89] found id: ""
	I0510 19:30:42.991603  459056 logs.go:282] 0 containers: []
	W0510 19:30:42.991611  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:42.991617  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:42.991685  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:43.029718  459056 cri.go:89] found id: ""
	I0510 19:30:43.029751  459056 logs.go:282] 0 containers: []
	W0510 19:30:43.029759  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:43.029766  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:43.029824  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:43.068297  459056 cri.go:89] found id: ""
	I0510 19:30:43.068328  459056 logs.go:282] 0 containers: []
	W0510 19:30:43.068335  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:43.068342  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:43.068405  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:43.109805  459056 cri.go:89] found id: ""
	I0510 19:30:43.109833  459056 logs.go:282] 0 containers: []
	W0510 19:30:43.109841  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:43.109847  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:43.109900  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:43.148109  459056 cri.go:89] found id: ""
	I0510 19:30:43.148141  459056 logs.go:282] 0 containers: []
	W0510 19:30:43.148149  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:43.148156  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:43.148224  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:43.185187  459056 cri.go:89] found id: ""
	I0510 19:30:43.185221  459056 logs.go:282] 0 containers: []
	W0510 19:30:43.185230  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:43.185239  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:43.185293  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:43.224447  459056 cri.go:89] found id: ""
	I0510 19:30:43.224476  459056 logs.go:282] 0 containers: []
	W0510 19:30:43.224485  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:43.224496  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:43.224552  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:43.268442  459056 cri.go:89] found id: ""
	I0510 19:30:43.268471  459056 logs.go:282] 0 containers: []
	W0510 19:30:43.268480  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:43.268489  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:43.268501  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:43.347249  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:43.347282  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:43.347307  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:43.427928  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:43.427975  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:43.473221  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:43.473258  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:43.522748  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:43.522796  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:46.040289  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:46.058969  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:46.059051  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:46.102709  459056 cri.go:89] found id: ""
	I0510 19:30:46.102757  459056 logs.go:282] 0 containers: []
	W0510 19:30:46.102775  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:46.102786  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:46.102848  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:46.146551  459056 cri.go:89] found id: ""
	I0510 19:30:46.146584  459056 logs.go:282] 0 containers: []
	W0510 19:30:46.146593  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:46.146599  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:46.146670  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:46.187445  459056 cri.go:89] found id: ""
	I0510 19:30:46.187484  459056 logs.go:282] 0 containers: []
	W0510 19:30:46.187498  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:46.187505  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:46.187575  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:46.224647  459056 cri.go:89] found id: ""
	I0510 19:30:46.224686  459056 logs.go:282] 0 containers: []
	W0510 19:30:46.224697  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:46.224706  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:46.224786  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:46.263513  459056 cri.go:89] found id: ""
	I0510 19:30:46.263545  459056 logs.go:282] 0 containers: []
	W0510 19:30:46.263554  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:46.263560  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:46.263639  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:46.300049  459056 cri.go:89] found id: ""
	I0510 19:30:46.300085  459056 logs.go:282] 0 containers: []
	W0510 19:30:46.300096  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:46.300104  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:46.300174  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:46.337107  459056 cri.go:89] found id: ""
	I0510 19:30:46.337139  459056 logs.go:282] 0 containers: []
	W0510 19:30:46.337150  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:46.337159  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:46.337219  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:46.373699  459056 cri.go:89] found id: ""
	I0510 19:30:46.373736  459056 logs.go:282] 0 containers: []
	W0510 19:30:46.373748  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:46.373761  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:46.373777  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:46.425713  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:46.425764  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:46.441565  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:46.441602  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:46.517861  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:46.517897  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:46.517918  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:46.601755  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:46.601807  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:49.147704  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:49.165325  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:49.165397  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:49.206272  459056 cri.go:89] found id: ""
	I0510 19:30:49.206309  459056 logs.go:282] 0 containers: []
	W0510 19:30:49.206318  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:49.206324  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:49.206385  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:49.241832  459056 cri.go:89] found id: ""
	I0510 19:30:49.241863  459056 logs.go:282] 0 containers: []
	W0510 19:30:49.241871  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:49.241878  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:49.241958  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:49.280474  459056 cri.go:89] found id: ""
	I0510 19:30:49.280505  459056 logs.go:282] 0 containers: []
	W0510 19:30:49.280514  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:49.280520  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:49.280577  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:49.317656  459056 cri.go:89] found id: ""
	I0510 19:30:49.317687  459056 logs.go:282] 0 containers: []
	W0510 19:30:49.317699  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:49.317718  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:49.317789  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:49.356059  459056 cri.go:89] found id: ""
	I0510 19:30:49.356094  459056 logs.go:282] 0 containers: []
	W0510 19:30:49.356102  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:49.356112  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:49.356169  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:49.396831  459056 cri.go:89] found id: ""
	I0510 19:30:49.396864  459056 logs.go:282] 0 containers: []
	W0510 19:30:49.396877  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:49.396885  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:49.396954  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:49.433301  459056 cri.go:89] found id: ""
	I0510 19:30:49.433328  459056 logs.go:282] 0 containers: []
	W0510 19:30:49.433336  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:49.433342  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:49.433416  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:49.470642  459056 cri.go:89] found id: ""
	I0510 19:30:49.470674  459056 logs.go:282] 0 containers: []
	W0510 19:30:49.470686  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:49.470698  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:49.470715  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:49.520867  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:49.520910  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:49.536370  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:49.536406  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:49.608860  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:49.608894  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:49.608913  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:49.687344  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:49.687395  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:52.231133  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:52.248456  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:52.248550  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:52.288902  459056 cri.go:89] found id: ""
	I0510 19:30:52.288960  459056 logs.go:282] 0 containers: []
	W0510 19:30:52.288973  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:52.288982  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:52.289062  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:52.326578  459056 cri.go:89] found id: ""
	I0510 19:30:52.326611  459056 logs.go:282] 0 containers: []
	W0510 19:30:52.326626  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:52.326634  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:52.326713  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:52.368627  459056 cri.go:89] found id: ""
	I0510 19:30:52.368657  459056 logs.go:282] 0 containers: []
	W0510 19:30:52.368666  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:52.368672  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:52.368754  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:52.406483  459056 cri.go:89] found id: ""
	I0510 19:30:52.406518  459056 logs.go:282] 0 containers: []
	W0510 19:30:52.406526  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:52.406533  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:52.406599  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:52.445770  459056 cri.go:89] found id: ""
	I0510 19:30:52.445805  459056 logs.go:282] 0 containers: []
	W0510 19:30:52.445816  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:52.445826  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:52.445898  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:52.484279  459056 cri.go:89] found id: ""
	I0510 19:30:52.484315  459056 logs.go:282] 0 containers: []
	W0510 19:30:52.484325  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:52.484332  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:52.484395  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:52.523564  459056 cri.go:89] found id: ""
	I0510 19:30:52.523601  459056 logs.go:282] 0 containers: []
	W0510 19:30:52.523628  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:52.523634  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:52.523701  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:52.566712  459056 cri.go:89] found id: ""
	I0510 19:30:52.566747  459056 logs.go:282] 0 containers: []
	W0510 19:30:52.566756  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:52.566768  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:52.566784  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:52.618210  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:52.618263  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:52.635481  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:52.635518  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:52.710370  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:52.710415  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:52.710435  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:52.789902  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:52.789960  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:55.334697  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:55.351738  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:55.351815  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:55.387464  459056 cri.go:89] found id: ""
	I0510 19:30:55.387493  459056 logs.go:282] 0 containers: []
	W0510 19:30:55.387503  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:55.387512  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:55.387578  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:55.424565  459056 cri.go:89] found id: ""
	I0510 19:30:55.424597  459056 logs.go:282] 0 containers: []
	W0510 19:30:55.424608  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:55.424617  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:55.424690  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:55.461558  459056 cri.go:89] found id: ""
	I0510 19:30:55.461597  459056 logs.go:282] 0 containers: []
	W0510 19:30:55.461608  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:55.461616  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:55.461689  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:55.500713  459056 cri.go:89] found id: ""
	I0510 19:30:55.500742  459056 logs.go:282] 0 containers: []
	W0510 19:30:55.500756  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:55.500763  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:55.500826  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:55.536104  459056 cri.go:89] found id: ""
	I0510 19:30:55.536132  459056 logs.go:282] 0 containers: []
	W0510 19:30:55.536141  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:55.536147  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:55.536206  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:55.571895  459056 cri.go:89] found id: ""
	I0510 19:30:55.571924  459056 logs.go:282] 0 containers: []
	W0510 19:30:55.571932  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:55.571938  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:55.571996  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:55.610794  459056 cri.go:89] found id: ""
	I0510 19:30:55.610822  459056 logs.go:282] 0 containers: []
	W0510 19:30:55.610831  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:55.610837  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:55.610904  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:55.647514  459056 cri.go:89] found id: ""
	I0510 19:30:55.647544  459056 logs.go:282] 0 containers: []
	W0510 19:30:55.647554  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:55.647563  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:55.647578  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:55.697745  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:55.697788  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:55.714126  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:55.714161  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:55.786711  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:55.786735  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:55.786749  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:55.863002  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:55.863049  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:58.428393  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:58.446138  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:58.446216  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:58.482821  459056 cri.go:89] found id: ""
	I0510 19:30:58.482856  459056 logs.go:282] 0 containers: []
	W0510 19:30:58.482872  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:58.482880  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:58.482939  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:58.524325  459056 cri.go:89] found id: ""
	I0510 19:30:58.524358  459056 logs.go:282] 0 containers: []
	W0510 19:30:58.524369  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:58.524377  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:58.524433  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:58.564327  459056 cri.go:89] found id: ""
	I0510 19:30:58.564366  459056 logs.go:282] 0 containers: []
	W0510 19:30:58.564377  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:58.564383  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:58.564439  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:58.602937  459056 cri.go:89] found id: ""
	I0510 19:30:58.602966  459056 logs.go:282] 0 containers: []
	W0510 19:30:58.602974  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:58.602981  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:58.603038  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:58.639820  459056 cri.go:89] found id: ""
	I0510 19:30:58.639852  459056 logs.go:282] 0 containers: []
	W0510 19:30:58.639863  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:58.639871  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:58.639963  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:58.676466  459056 cri.go:89] found id: ""
	I0510 19:30:58.676503  459056 logs.go:282] 0 containers: []
	W0510 19:30:58.676515  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:58.676524  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:58.676593  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:58.712669  459056 cri.go:89] found id: ""
	I0510 19:30:58.712706  459056 logs.go:282] 0 containers: []
	W0510 19:30:58.712715  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:58.712721  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:58.712797  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:58.748436  459056 cri.go:89] found id: ""
	I0510 19:30:58.748474  459056 logs.go:282] 0 containers: []
	W0510 19:30:58.748485  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:58.748496  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:58.748513  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:58.801263  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:58.801311  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:58.816908  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:58.816945  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:58.890881  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:58.890912  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:58.890932  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:58.969061  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:58.969113  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:31:01.513933  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:31:01.531492  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:31:01.531565  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:31:01.568296  459056 cri.go:89] found id: ""
	I0510 19:31:01.568324  459056 logs.go:282] 0 containers: []
	W0510 19:31:01.568333  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:31:01.568340  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:31:01.568396  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:31:01.610372  459056 cri.go:89] found id: ""
	I0510 19:31:01.610406  459056 logs.go:282] 0 containers: []
	W0510 19:31:01.610415  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:31:01.610421  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:31:01.610485  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:31:01.648652  459056 cri.go:89] found id: ""
	I0510 19:31:01.648682  459056 logs.go:282] 0 containers: []
	W0510 19:31:01.648690  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:31:01.648696  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:31:01.648751  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:31:01.686551  459056 cri.go:89] found id: ""
	I0510 19:31:01.686583  459056 logs.go:282] 0 containers: []
	W0510 19:31:01.686595  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:31:01.686604  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:31:01.686694  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:31:01.724202  459056 cri.go:89] found id: ""
	I0510 19:31:01.724243  459056 logs.go:282] 0 containers: []
	W0510 19:31:01.724255  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:31:01.724261  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:31:01.724337  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:31:01.763500  459056 cri.go:89] found id: ""
	I0510 19:31:01.763534  459056 logs.go:282] 0 containers: []
	W0510 19:31:01.763544  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:31:01.763550  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:31:01.763629  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:31:01.808280  459056 cri.go:89] found id: ""
	I0510 19:31:01.808312  459056 logs.go:282] 0 containers: []
	W0510 19:31:01.808324  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:31:01.808332  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:31:01.808403  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:31:01.843980  459056 cri.go:89] found id: ""
	I0510 19:31:01.844018  459056 logs.go:282] 0 containers: []
	W0510 19:31:01.844031  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:31:01.844044  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:31:01.844061  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:31:01.907482  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:31:01.907521  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:31:01.922645  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:31:01.922683  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:31:01.999977  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:31:02.000009  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:31:02.000031  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:31:02.078872  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:31:02.078920  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:31:04.624201  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:31:04.641739  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:31:04.641818  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:31:04.680796  459056 cri.go:89] found id: ""
	I0510 19:31:04.680825  459056 logs.go:282] 0 containers: []
	W0510 19:31:04.680833  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:31:04.680839  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:31:04.680893  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:31:04.718840  459056 cri.go:89] found id: ""
	I0510 19:31:04.718867  459056 logs.go:282] 0 containers: []
	W0510 19:31:04.718874  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:31:04.718880  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:31:04.718943  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:31:04.753687  459056 cri.go:89] found id: ""
	I0510 19:31:04.753726  459056 logs.go:282] 0 containers: []
	W0510 19:31:04.753737  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:31:04.753745  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:31:04.753815  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:31:04.790863  459056 cri.go:89] found id: ""
	I0510 19:31:04.790893  459056 logs.go:282] 0 containers: []
	W0510 19:31:04.790903  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:31:04.790910  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:31:04.790969  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:31:04.828293  459056 cri.go:89] found id: ""
	I0510 19:31:04.828321  459056 logs.go:282] 0 containers: []
	W0510 19:31:04.828329  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:31:04.828335  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:31:04.828400  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:31:04.865914  459056 cri.go:89] found id: ""
	I0510 19:31:04.865955  459056 logs.go:282] 0 containers: []
	W0510 19:31:04.865964  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:31:04.865970  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:31:04.866025  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:31:04.902834  459056 cri.go:89] found id: ""
	I0510 19:31:04.902866  459056 logs.go:282] 0 containers: []
	W0510 19:31:04.902879  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:31:04.902888  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:31:04.902960  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:31:04.939660  459056 cri.go:89] found id: ""
	I0510 19:31:04.939694  459056 logs.go:282] 0 containers: []
	W0510 19:31:04.939702  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:31:04.939711  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:31:04.939729  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:31:04.954569  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:31:04.954608  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:31:05.026998  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:31:05.027024  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:31:05.027041  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:31:05.111468  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:31:05.111520  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:31:05.155909  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:31:05.155953  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:31:07.709153  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:31:07.726572  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:31:07.726671  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:31:07.766663  459056 cri.go:89] found id: ""
	I0510 19:31:07.766691  459056 logs.go:282] 0 containers: []
	W0510 19:31:07.766703  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:31:07.766712  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:31:07.766909  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:31:07.806853  459056 cri.go:89] found id: ""
	I0510 19:31:07.806902  459056 logs.go:282] 0 containers: []
	W0510 19:31:07.806911  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:31:07.806917  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:31:07.806985  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:31:07.845188  459056 cri.go:89] found id: ""
	I0510 19:31:07.845218  459056 logs.go:282] 0 containers: []
	W0510 19:31:07.845227  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:31:07.845233  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:31:07.845291  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:31:07.884790  459056 cri.go:89] found id: ""
	I0510 19:31:07.884827  459056 logs.go:282] 0 containers: []
	W0510 19:31:07.884840  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:31:07.884847  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:31:07.884919  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:31:07.924161  459056 cri.go:89] found id: ""
	I0510 19:31:07.924195  459056 logs.go:282] 0 containers: []
	W0510 19:31:07.924206  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:31:07.924222  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:31:07.924288  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:31:07.962697  459056 cri.go:89] found id: ""
	I0510 19:31:07.962724  459056 logs.go:282] 0 containers: []
	W0510 19:31:07.962735  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:31:07.962744  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:31:07.962840  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:31:08.001266  459056 cri.go:89] found id: ""
	I0510 19:31:08.001306  459056 logs.go:282] 0 containers: []
	W0510 19:31:08.001318  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:31:08.001326  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:31:08.001418  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:31:08.040211  459056 cri.go:89] found id: ""
	I0510 19:31:08.040238  459056 logs.go:282] 0 containers: []
	W0510 19:31:08.040247  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:31:08.040255  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:31:08.040272  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:31:08.114738  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:31:08.114784  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:31:08.114802  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:31:08.188677  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:31:08.188725  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:31:08.232875  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:31:08.232908  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:31:08.293039  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:31:08.293095  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:31:10.811640  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:31:10.828942  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:31:10.829017  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:31:10.866960  459056 cri.go:89] found id: ""
	I0510 19:31:10.866993  459056 logs.go:282] 0 containers: []
	W0510 19:31:10.867003  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:31:10.867009  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:31:10.867066  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:31:10.906391  459056 cri.go:89] found id: ""
	I0510 19:31:10.906421  459056 logs.go:282] 0 containers: []
	W0510 19:31:10.906430  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:31:10.906436  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:31:10.906503  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:31:10.947062  459056 cri.go:89] found id: ""
	I0510 19:31:10.947091  459056 logs.go:282] 0 containers: []
	W0510 19:31:10.947100  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:31:10.947106  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:31:10.947172  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:31:10.984506  459056 cri.go:89] found id: ""
	I0510 19:31:10.984535  459056 logs.go:282] 0 containers: []
	W0510 19:31:10.984543  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:31:10.984549  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:31:10.984613  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:31:11.022676  459056 cri.go:89] found id: ""
	I0510 19:31:11.022715  459056 logs.go:282] 0 containers: []
	W0510 19:31:11.022724  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:31:11.022730  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:31:11.022805  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:31:11.067215  459056 cri.go:89] found id: ""
	I0510 19:31:11.067260  459056 logs.go:282] 0 containers: []
	W0510 19:31:11.067273  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:31:11.067282  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:31:11.067344  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:31:11.106883  459056 cri.go:89] found id: ""
	I0510 19:31:11.106912  459056 logs.go:282] 0 containers: []
	W0510 19:31:11.106920  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:31:11.106926  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:31:11.106984  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:31:11.148375  459056 cri.go:89] found id: ""
	I0510 19:31:11.148408  459056 logs.go:282] 0 containers: []
	W0510 19:31:11.148416  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:31:11.148426  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:31:11.148441  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:31:11.199507  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:31:11.199555  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:31:11.215477  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:31:11.215509  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:31:11.285250  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:31:11.285278  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:31:11.285292  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:31:11.365666  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:31:11.365724  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:31:13.914500  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:31:13.931769  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:31:13.931843  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:31:13.971450  459056 cri.go:89] found id: ""
	I0510 19:31:13.971481  459056 logs.go:282] 0 containers: []
	W0510 19:31:13.971491  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:31:13.971503  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:31:13.971585  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:31:14.016556  459056 cri.go:89] found id: ""
	I0510 19:31:14.016603  459056 logs.go:282] 0 containers: []
	W0510 19:31:14.016615  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:31:14.016624  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:31:14.016717  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:31:14.067360  459056 cri.go:89] found id: ""
	I0510 19:31:14.067395  459056 logs.go:282] 0 containers: []
	W0510 19:31:14.067406  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:31:14.067415  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:31:14.067490  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:31:14.115508  459056 cri.go:89] found id: ""
	I0510 19:31:14.115547  459056 logs.go:282] 0 containers: []
	W0510 19:31:14.115559  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:31:14.115566  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:31:14.115653  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:31:14.162589  459056 cri.go:89] found id: ""
	I0510 19:31:14.162620  459056 logs.go:282] 0 containers: []
	W0510 19:31:14.162629  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:31:14.162635  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:31:14.162720  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:31:14.203802  459056 cri.go:89] found id: ""
	I0510 19:31:14.203842  459056 logs.go:282] 0 containers: []
	W0510 19:31:14.203853  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:31:14.203861  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:31:14.203927  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:31:14.242404  459056 cri.go:89] found id: ""
	I0510 19:31:14.242440  459056 logs.go:282] 0 containers: []
	W0510 19:31:14.242449  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:31:14.242455  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:31:14.242526  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:31:14.279788  459056 cri.go:89] found id: ""
	I0510 19:31:14.279820  459056 logs.go:282] 0 containers: []
	W0510 19:31:14.279831  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:31:14.279843  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:31:14.279861  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:31:14.295706  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:31:14.295741  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:31:14.369637  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:31:14.369665  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:31:14.369684  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:31:14.445062  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:31:14.445113  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:31:14.488659  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:31:14.488692  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:31:17.042803  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:31:17.060263  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:31:17.060348  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:31:17.098561  459056 cri.go:89] found id: ""
	I0510 19:31:17.098588  459056 logs.go:282] 0 containers: []
	W0510 19:31:17.098597  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:31:17.098602  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:31:17.098666  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:31:17.136124  459056 cri.go:89] found id: ""
	I0510 19:31:17.136155  459056 logs.go:282] 0 containers: []
	W0510 19:31:17.136163  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:31:17.136169  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:31:17.136226  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:31:17.174746  459056 cri.go:89] found id: ""
	I0510 19:31:17.174773  459056 logs.go:282] 0 containers: []
	W0510 19:31:17.174781  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:31:17.174788  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:31:17.174853  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:31:17.211764  459056 cri.go:89] found id: ""
	I0510 19:31:17.211802  459056 logs.go:282] 0 containers: []
	W0510 19:31:17.211813  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:31:17.211822  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:31:17.211893  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:31:17.250173  459056 cri.go:89] found id: ""
	I0510 19:31:17.250220  459056 logs.go:282] 0 containers: []
	W0510 19:31:17.250231  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:31:17.250240  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:31:17.250307  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:31:17.288067  459056 cri.go:89] found id: ""
	I0510 19:31:17.288098  459056 logs.go:282] 0 containers: []
	W0510 19:31:17.288106  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:31:17.288113  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:31:17.288167  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:31:17.332174  459056 cri.go:89] found id: ""
	I0510 19:31:17.332201  459056 logs.go:282] 0 containers: []
	W0510 19:31:17.332210  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:31:17.332215  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:31:17.332279  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:31:17.368361  459056 cri.go:89] found id: ""
	I0510 19:31:17.368393  459056 logs.go:282] 0 containers: []
	W0510 19:31:17.368401  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:31:17.368414  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:31:17.368431  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:31:17.419140  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:31:17.419188  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:31:17.435060  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:31:17.435092  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:31:17.503946  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:31:17.503971  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:31:17.503985  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:31:17.577584  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:31:17.577636  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:31:20.122561  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:31:20.140245  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:31:20.140318  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:31:20.176963  459056 cri.go:89] found id: ""
	I0510 19:31:20.176997  459056 logs.go:282] 0 containers: []
	W0510 19:31:20.177006  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:31:20.177014  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:31:20.177082  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:31:20.214648  459056 cri.go:89] found id: ""
	I0510 19:31:20.214686  459056 logs.go:282] 0 containers: []
	W0510 19:31:20.214694  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:31:20.214700  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:31:20.214756  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:31:20.252572  459056 cri.go:89] found id: ""
	I0510 19:31:20.252603  459056 logs.go:282] 0 containers: []
	W0510 19:31:20.252610  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:31:20.252616  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:31:20.252690  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:31:20.292626  459056 cri.go:89] found id: ""
	I0510 19:31:20.292658  459056 logs.go:282] 0 containers: []
	W0510 19:31:20.292667  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:31:20.292673  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:31:20.292731  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:31:20.331394  459056 cri.go:89] found id: ""
	I0510 19:31:20.331426  459056 logs.go:282] 0 containers: []
	W0510 19:31:20.331433  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:31:20.331440  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:31:20.331493  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:31:20.369499  459056 cri.go:89] found id: ""
	I0510 19:31:20.369526  459056 logs.go:282] 0 containers: []
	W0510 19:31:20.369534  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:31:20.369541  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:31:20.369598  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:31:20.409063  459056 cri.go:89] found id: ""
	I0510 19:31:20.409101  459056 logs.go:282] 0 containers: []
	W0510 19:31:20.409119  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:31:20.409129  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:31:20.409202  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:31:20.448127  459056 cri.go:89] found id: ""
	I0510 19:31:20.448165  459056 logs.go:282] 0 containers: []
	W0510 19:31:20.448176  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:31:20.448192  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:31:20.448217  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:31:20.529717  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:31:20.529761  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:31:20.572287  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:31:20.572324  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:31:20.622908  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:31:20.622953  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:31:20.638966  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:31:20.639001  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:31:20.710197  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:31:23.211978  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:31:23.228993  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:31:23.229066  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:31:23.266521  459056 cri.go:89] found id: ""
	I0510 19:31:23.266554  459056 logs.go:282] 0 containers: []
	W0510 19:31:23.266563  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:31:23.266570  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:31:23.266624  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:31:23.305315  459056 cri.go:89] found id: ""
	I0510 19:31:23.305348  459056 logs.go:282] 0 containers: []
	W0510 19:31:23.305362  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:31:23.305371  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:31:23.305428  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:31:23.353734  459056 cri.go:89] found id: ""
	I0510 19:31:23.353764  459056 logs.go:282] 0 containers: []
	W0510 19:31:23.353773  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:31:23.353779  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:31:23.353836  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:31:23.392351  459056 cri.go:89] found id: ""
	I0510 19:31:23.392389  459056 logs.go:282] 0 containers: []
	W0510 19:31:23.392400  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:31:23.392408  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:31:23.392481  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:31:23.432302  459056 cri.go:89] found id: ""
	I0510 19:31:23.432338  459056 logs.go:282] 0 containers: []
	W0510 19:31:23.432349  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:31:23.432357  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:31:23.432423  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:31:23.470143  459056 cri.go:89] found id: ""
	I0510 19:31:23.470171  459056 logs.go:282] 0 containers: []
	W0510 19:31:23.470178  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:31:23.470184  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:31:23.470240  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:31:23.510123  459056 cri.go:89] found id: ""
	I0510 19:31:23.510151  459056 logs.go:282] 0 containers: []
	W0510 19:31:23.510158  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:31:23.510164  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:31:23.510218  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:31:23.548111  459056 cri.go:89] found id: ""
	I0510 19:31:23.548146  459056 logs.go:282] 0 containers: []
	W0510 19:31:23.548155  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:31:23.548165  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:31:23.548177  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:31:23.592214  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:31:23.592252  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:31:23.644384  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:31:23.644431  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:31:23.660004  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:31:23.660050  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:31:23.737601  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:31:23.737630  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:31:23.737646  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:31:26.318790  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:31:26.335345  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:31:26.335418  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:31:26.374890  459056 cri.go:89] found id: ""
	I0510 19:31:26.374925  459056 logs.go:282] 0 containers: []
	W0510 19:31:26.374939  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:31:26.374949  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:31:26.375022  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:31:26.416223  459056 cri.go:89] found id: ""
	I0510 19:31:26.416256  459056 logs.go:282] 0 containers: []
	W0510 19:31:26.416269  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:31:26.416279  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:31:26.416360  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:31:26.455431  459056 cri.go:89] found id: ""
	I0510 19:31:26.455472  459056 logs.go:282] 0 containers: []
	W0510 19:31:26.455485  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:31:26.455493  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:31:26.455563  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:31:26.493542  459056 cri.go:89] found id: ""
	I0510 19:31:26.493569  459056 logs.go:282] 0 containers: []
	W0510 19:31:26.493579  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:31:26.493588  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:31:26.493657  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:31:26.536613  459056 cri.go:89] found id: ""
	I0510 19:31:26.536642  459056 logs.go:282] 0 containers: []
	W0510 19:31:26.536651  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:31:26.536657  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:31:26.536742  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:31:26.574555  459056 cri.go:89] found id: ""
	I0510 19:31:26.574589  459056 logs.go:282] 0 containers: []
	W0510 19:31:26.574601  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:31:26.574610  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:31:26.574686  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:31:26.615726  459056 cri.go:89] found id: ""
	I0510 19:31:26.615767  459056 logs.go:282] 0 containers: []
	W0510 19:31:26.615779  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:31:26.615794  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:31:26.616130  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:31:26.658332  459056 cri.go:89] found id: ""
	I0510 19:31:26.658364  459056 logs.go:282] 0 containers: []
	W0510 19:31:26.658373  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:31:26.658382  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:31:26.658397  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:31:26.714050  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:31:26.714103  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:31:26.729247  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:31:26.729283  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:31:26.802056  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:31:26.802098  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:31:26.802117  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:31:26.880723  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:31:26.880777  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:31:29.424963  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:31:29.442400  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:31:29.442471  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:31:29.480974  459056 cri.go:89] found id: ""
	I0510 19:31:29.481014  459056 logs.go:282] 0 containers: []
	W0510 19:31:29.481025  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:31:29.481032  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:31:29.481103  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:31:29.517132  459056 cri.go:89] found id: ""
	I0510 19:31:29.517178  459056 logs.go:282] 0 containers: []
	W0510 19:31:29.517190  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:31:29.517199  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:31:29.517271  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:31:29.555573  459056 cri.go:89] found id: ""
	I0510 19:31:29.555610  459056 logs.go:282] 0 containers: []
	W0510 19:31:29.555621  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:31:29.555629  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:31:29.555706  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:31:29.591136  459056 cri.go:89] found id: ""
	I0510 19:31:29.591168  459056 logs.go:282] 0 containers: []
	W0510 19:31:29.591175  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:31:29.591181  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:31:29.591249  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:31:29.629174  459056 cri.go:89] found id: ""
	I0510 19:31:29.629205  459056 logs.go:282] 0 containers: []
	W0510 19:31:29.629214  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:31:29.629220  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:31:29.629285  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:31:29.666035  459056 cri.go:89] found id: ""
	I0510 19:31:29.666067  459056 logs.go:282] 0 containers: []
	W0510 19:31:29.666075  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:31:29.666081  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:31:29.666140  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:31:29.705842  459056 cri.go:89] found id: ""
	I0510 19:31:29.705872  459056 logs.go:282] 0 containers: []
	W0510 19:31:29.705880  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:31:29.705886  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:31:29.705964  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:31:29.743559  459056 cri.go:89] found id: ""
	I0510 19:31:29.743592  459056 logs.go:282] 0 containers: []
	W0510 19:31:29.743600  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:31:29.743623  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:31:29.743637  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:31:29.792453  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:31:29.792499  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:31:29.807725  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:31:29.807765  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:31:29.881784  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:31:29.881812  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:31:29.881825  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:31:29.954965  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:31:29.955014  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:31:32.502586  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:31:32.520169  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:31:32.520239  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:31:32.557308  459056 cri.go:89] found id: ""
	I0510 19:31:32.557342  459056 logs.go:282] 0 containers: []
	W0510 19:31:32.557350  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:31:32.557356  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:31:32.557411  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:31:32.595792  459056 cri.go:89] found id: ""
	I0510 19:31:32.595822  459056 logs.go:282] 0 containers: []
	W0510 19:31:32.595830  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:31:32.595835  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:31:32.595891  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:31:32.634389  459056 cri.go:89] found id: ""
	I0510 19:31:32.634429  459056 logs.go:282] 0 containers: []
	W0510 19:31:32.634437  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:31:32.634443  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:31:32.634517  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:31:32.675925  459056 cri.go:89] found id: ""
	I0510 19:31:32.675957  459056 logs.go:282] 0 containers: []
	W0510 19:31:32.675966  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:31:32.675973  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:31:32.676027  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:31:32.712730  459056 cri.go:89] found id: ""
	I0510 19:31:32.712767  459056 logs.go:282] 0 containers: []
	W0510 19:31:32.712776  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:31:32.712782  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:31:32.712843  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:31:32.749733  459056 cri.go:89] found id: ""
	I0510 19:31:32.749765  459056 logs.go:282] 0 containers: []
	W0510 19:31:32.749774  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:31:32.749781  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:31:32.749841  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:31:32.789481  459056 cri.go:89] found id: ""
	I0510 19:31:32.789513  459056 logs.go:282] 0 containers: []
	W0510 19:31:32.789521  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:31:32.789527  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:31:32.789586  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:31:32.828742  459056 cri.go:89] found id: ""
	I0510 19:31:32.828779  459056 logs.go:282] 0 containers: []
	W0510 19:31:32.828788  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:31:32.828798  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:31:32.828822  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:31:32.843753  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:31:32.843787  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:31:32.912953  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:31:32.912982  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:31:32.912995  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:31:32.989726  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:31:32.989770  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:31:33.040906  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:31:33.040943  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:31:35.593878  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:31:35.612402  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:31:35.612506  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:31:35.651532  459056 cri.go:89] found id: ""
	I0510 19:31:35.651562  459056 logs.go:282] 0 containers: []
	W0510 19:31:35.651571  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:31:35.651579  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:31:35.651671  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:31:35.689499  459056 cri.go:89] found id: ""
	I0510 19:31:35.689530  459056 logs.go:282] 0 containers: []
	W0510 19:31:35.689539  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:31:35.689546  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:31:35.689611  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:31:35.729195  459056 cri.go:89] found id: ""
	I0510 19:31:35.729230  459056 logs.go:282] 0 containers: []
	W0510 19:31:35.729239  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:31:35.729245  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:31:35.729314  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:31:35.767099  459056 cri.go:89] found id: ""
	I0510 19:31:35.767133  459056 logs.go:282] 0 containers: []
	W0510 19:31:35.767146  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:31:35.767151  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:31:35.767208  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:31:35.808130  459056 cri.go:89] found id: ""
	I0510 19:31:35.808166  459056 logs.go:282] 0 containers: []
	W0510 19:31:35.808179  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:31:35.808187  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:31:35.808261  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:31:35.845791  459056 cri.go:89] found id: ""
	I0510 19:31:35.845824  459056 logs.go:282] 0 containers: []
	W0510 19:31:35.845834  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:31:35.845841  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:31:35.846005  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:31:35.884049  459056 cri.go:89] found id: ""
	I0510 19:31:35.884083  459056 logs.go:282] 0 containers: []
	W0510 19:31:35.884093  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:31:35.884101  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:31:35.884182  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:31:35.921358  459056 cri.go:89] found id: ""
	I0510 19:31:35.921405  459056 logs.go:282] 0 containers: []
	W0510 19:31:35.921438  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:31:35.921454  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:31:35.921471  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:31:35.975819  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:31:35.975866  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:31:35.991683  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:31:35.991719  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:31:36.062576  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:31:36.062609  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:31:36.062692  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:31:36.144124  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:31:36.144171  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:31:38.688627  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:31:38.706961  459056 kubeadm.go:593] duration metric: took 4m1.80853031s to restartPrimaryControlPlane
	W0510 19:31:38.707088  459056 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0510 19:31:38.707129  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0510 19:31:42.433199  459056 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.726037031s)
	I0510 19:31:42.433304  459056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0510 19:31:42.450520  459056 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0510 19:31:42.464170  459056 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0510 19:31:42.478440  459056 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0510 19:31:42.478465  459056 kubeadm.go:157] found existing configuration files:
	
	I0510 19:31:42.478527  459056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0510 19:31:42.490756  459056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0510 19:31:42.490825  459056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0510 19:31:42.503476  459056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0510 19:31:42.516078  459056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0510 19:31:42.516162  459056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0510 19:31:42.529093  459056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0510 19:31:42.541784  459056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0510 19:31:42.541857  459056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0510 19:31:42.554154  459056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0510 19:31:42.566298  459056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0510 19:31:42.566366  459056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0510 19:31:42.579144  459056 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0510 19:31:42.808604  459056 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0510 19:33:39.237462  459056 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0510 19:33:39.237653  459056 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0510 19:33:39.240214  459056 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0510 19:33:39.240284  459056 kubeadm.go:310] [preflight] Running pre-flight checks
	I0510 19:33:39.240378  459056 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0510 19:33:39.240505  459056 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0510 19:33:39.240669  459056 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0510 19:33:39.240726  459056 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0510 19:33:39.242836  459056 out.go:235]   - Generating certificates and keys ...
	I0510 19:33:39.242931  459056 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0510 19:33:39.243010  459056 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0510 19:33:39.243103  459056 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0510 19:33:39.243180  459056 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0510 19:33:39.243286  459056 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0510 19:33:39.243366  459056 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0510 19:33:39.243440  459056 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0510 19:33:39.243544  459056 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0510 19:33:39.243662  459056 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0510 19:33:39.243769  459056 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0510 19:33:39.243830  459056 kubeadm.go:310] [certs] Using the existing "sa" key
	I0510 19:33:39.243905  459056 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0510 19:33:39.243972  459056 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0510 19:33:39.244018  459056 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0510 19:33:39.244072  459056 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0510 19:33:39.244132  459056 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0510 19:33:39.244227  459056 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0510 19:33:39.244322  459056 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0510 19:33:39.244375  459056 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0510 19:33:39.244459  459056 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0510 19:33:39.246586  459056 out.go:235]   - Booting up control plane ...
	I0510 19:33:39.246698  459056 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0510 19:33:39.246800  459056 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0510 19:33:39.246872  459056 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0510 19:33:39.246943  459056 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0510 19:33:39.247151  459056 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0510 19:33:39.247198  459056 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0510 19:33:39.247270  459056 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:33:39.247423  459056 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:33:39.247478  459056 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:33:39.247671  459056 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:33:39.247748  459056 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:33:39.247894  459056 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:33:39.247981  459056 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:33:39.248179  459056 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:33:39.248247  459056 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:33:39.248415  459056 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:33:39.248423  459056 kubeadm.go:310] 
	I0510 19:33:39.248461  459056 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0510 19:33:39.248497  459056 kubeadm.go:310] 		timed out waiting for the condition
	I0510 19:33:39.248507  459056 kubeadm.go:310] 
	I0510 19:33:39.248540  459056 kubeadm.go:310] 	This error is likely caused by:
	I0510 19:33:39.248570  459056 kubeadm.go:310] 		- The kubelet is not running
	I0510 19:33:39.248664  459056 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0510 19:33:39.248671  459056 kubeadm.go:310] 
	I0510 19:33:39.248767  459056 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0510 19:33:39.248803  459056 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0510 19:33:39.248832  459056 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0510 19:33:39.248839  459056 kubeadm.go:310] 
	I0510 19:33:39.248927  459056 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0510 19:33:39.249007  459056 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0510 19:33:39.249014  459056 kubeadm.go:310] 
	I0510 19:33:39.249164  459056 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0510 19:33:39.249288  459056 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0510 19:33:39.249351  459056 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0510 19:33:39.249408  459056 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0510 19:33:39.249484  459056 kubeadm.go:310] 
	W0510 19:33:39.249624  459056 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0510 19:33:39.249703  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0510 19:33:39.710770  459056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0510 19:33:39.729461  459056 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0510 19:33:39.741531  459056 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0510 19:33:39.741556  459056 kubeadm.go:157] found existing configuration files:
	
	I0510 19:33:39.741617  459056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0510 19:33:39.752271  459056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0510 19:33:39.752339  459056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0510 19:33:39.764450  459056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0510 19:33:39.775142  459056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0510 19:33:39.775203  459056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0510 19:33:39.787008  459056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0510 19:33:39.798070  459056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0510 19:33:39.798143  459056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0510 19:33:39.809980  459056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0510 19:33:39.821862  459056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0510 19:33:39.821930  459056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0510 19:33:39.833890  459056 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0510 19:33:40.070673  459056 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0510 19:35:36.029186  459056 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0510 19:35:36.029314  459056 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0510 19:35:36.032027  459056 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0510 19:35:36.032078  459056 kubeadm.go:310] [preflight] Running pre-flight checks
	I0510 19:35:36.032177  459056 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0510 19:35:36.032280  459056 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0510 19:35:36.032361  459056 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0510 19:35:36.032446  459056 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0510 19:35:36.034371  459056 out.go:235]   - Generating certificates and keys ...
	I0510 19:35:36.034447  459056 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0510 19:35:36.034498  459056 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0510 19:35:36.034563  459056 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0510 19:35:36.034612  459056 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0510 19:35:36.034675  459056 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0510 19:35:36.034778  459056 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0510 19:35:36.034874  459056 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0510 19:35:36.034977  459056 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0510 19:35:36.035054  459056 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0510 19:35:36.035126  459056 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0510 19:35:36.035158  459056 kubeadm.go:310] [certs] Using the existing "sa" key
	I0510 19:35:36.035206  459056 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0510 19:35:36.035286  459056 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0510 19:35:36.035370  459056 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0510 19:35:36.035434  459056 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0510 19:35:36.035501  459056 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0510 19:35:36.035658  459056 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0510 19:35:36.035738  459056 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0510 19:35:36.035795  459056 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0510 19:35:36.035884  459056 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0510 19:35:36.037686  459056 out.go:235]   - Booting up control plane ...
	I0510 19:35:36.037791  459056 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0510 19:35:36.037869  459056 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0510 19:35:36.037934  459056 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0510 19:35:36.038008  459056 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0510 19:35:36.038231  459056 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0510 19:35:36.038305  459056 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0510 19:35:36.038398  459056 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:35:36.038630  459056 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:35:36.038727  459056 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:35:36.038913  459056 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:35:36.038987  459056 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:35:36.039203  459056 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:35:36.039326  459056 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:35:36.039577  459056 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:35:36.039655  459056 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:35:36.039818  459056 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:35:36.039825  459056 kubeadm.go:310] 
	I0510 19:35:36.039859  459056 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0510 19:35:36.039904  459056 kubeadm.go:310] 		timed out waiting for the condition
	I0510 19:35:36.039919  459056 kubeadm.go:310] 
	I0510 19:35:36.039948  459056 kubeadm.go:310] 	This error is likely caused by:
	I0510 19:35:36.039978  459056 kubeadm.go:310] 		- The kubelet is not running
	I0510 19:35:36.040071  459056 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0510 19:35:36.040078  459056 kubeadm.go:310] 
	I0510 19:35:36.040179  459056 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0510 19:35:36.040209  459056 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0510 19:35:36.040237  459056 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0510 19:35:36.040244  459056 kubeadm.go:310] 
	I0510 19:35:36.040337  459056 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0510 19:35:36.040419  459056 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0510 19:35:36.040442  459056 kubeadm.go:310] 
	I0510 19:35:36.040555  459056 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0510 19:35:36.040655  459056 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0510 19:35:36.040766  459056 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0510 19:35:36.040836  459056 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0510 19:35:36.040862  459056 kubeadm.go:310] 
	I0510 19:35:36.040906  459056 kubeadm.go:394] duration metric: took 7m59.202425038s to StartCluster
	I0510 19:35:36.040958  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:35:36.041023  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:35:36.097650  459056 cri.go:89] found id: ""
	I0510 19:35:36.097683  459056 logs.go:282] 0 containers: []
	W0510 19:35:36.097698  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:35:36.097708  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:35:36.097773  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:35:36.142587  459056 cri.go:89] found id: ""
	I0510 19:35:36.142619  459056 logs.go:282] 0 containers: []
	W0510 19:35:36.142627  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:35:36.142633  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:35:36.142702  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:35:36.186330  459056 cri.go:89] found id: ""
	I0510 19:35:36.186361  459056 logs.go:282] 0 containers: []
	W0510 19:35:36.186370  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:35:36.186376  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:35:36.186444  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:35:36.230965  459056 cri.go:89] found id: ""
	I0510 19:35:36.230994  459056 logs.go:282] 0 containers: []
	W0510 19:35:36.231001  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:35:36.231007  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:35:36.231062  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:35:36.276491  459056 cri.go:89] found id: ""
	I0510 19:35:36.276520  459056 logs.go:282] 0 containers: []
	W0510 19:35:36.276528  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:35:36.276534  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:35:36.276598  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:35:36.321937  459056 cri.go:89] found id: ""
	I0510 19:35:36.321971  459056 logs.go:282] 0 containers: []
	W0510 19:35:36.321980  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:35:36.321987  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:35:36.322050  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:35:36.364757  459056 cri.go:89] found id: ""
	I0510 19:35:36.364797  459056 logs.go:282] 0 containers: []
	W0510 19:35:36.364809  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:35:36.364818  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:35:36.364875  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:35:36.409488  459056 cri.go:89] found id: ""
	I0510 19:35:36.409523  459056 logs.go:282] 0 containers: []
	W0510 19:35:36.409532  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:35:36.409546  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:35:36.409561  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:35:36.462665  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:35:36.462705  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:35:36.478560  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:35:36.478591  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:35:36.555871  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:35:36.555904  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:35:36.555922  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:35:36.674559  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:35:36.674603  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0510 19:35:36.723413  459056 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0510 19:35:36.723488  459056 out.go:270] * 
	* 
	W0510 19:35:36.723574  459056 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0510 19:35:36.723589  459056 out.go:270] * 
	* 
	W0510 19:35:36.724458  459056 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0510 19:35:36.727493  459056 out.go:201] 
	W0510 19:35:36.728543  459056 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0510 19:35:36.728588  459056 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0510 19:35:36.728604  459056 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0510 19:35:36.729894  459056 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-089147 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-089147 -n old-k8s-version-089147
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-089147 -n old-k8s-version-089147: exit status 2 (282.343264ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-089147 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-089147 logs -n 25: (1.079644636s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p default-k8s-diff-port-544623       | default-k8s-diff-port-544623 | jenkins | v1.35.0 | 10 May 25 19:25 UTC | 10 May 25 19:25 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-544623 | jenkins | v1.35.0 | 10 May 25 19:25 UTC | 10 May 25 19:26 UTC |
	|         | default-k8s-diff-port-544623                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-089147        | old-k8s-version-089147       | jenkins | v1.35.0 | 10 May 25 19:25 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-483140            | embed-certs-483140           | jenkins | v1.35.0 | 10 May 25 19:25 UTC | 10 May 25 19:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-483140                                  | embed-certs-483140           | jenkins | v1.35.0 | 10 May 25 19:25 UTC | 10 May 25 19:27 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | no-preload-433152 image list                           | no-preload-433152            | jenkins | v1.35.0 | 10 May 25 19:26 UTC | 10 May 25 19:26 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-433152                                   | no-preload-433152            | jenkins | v1.35.0 | 10 May 25 19:26 UTC | 10 May 25 19:26 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-433152                                   | no-preload-433152            | jenkins | v1.35.0 | 10 May 25 19:26 UTC | 10 May 25 19:26 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-433152                                   | no-preload-433152            | jenkins | v1.35.0 | 10 May 25 19:26 UTC | 10 May 25 19:26 UTC |
	| delete  | -p no-preload-433152                                   | no-preload-433152            | jenkins | v1.35.0 | 10 May 25 19:26 UTC | 10 May 25 19:26 UTC |
	| image   | default-k8s-diff-port-544623                           | default-k8s-diff-port-544623 | jenkins | v1.35.0 | 10 May 25 19:26 UTC | 10 May 25 19:26 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-544623 | jenkins | v1.35.0 | 10 May 25 19:26 UTC | 10 May 25 19:26 UTC |
	|         | default-k8s-diff-port-544623                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-544623 | jenkins | v1.35.0 | 10 May 25 19:26 UTC | 10 May 25 19:26 UTC |
	|         | default-k8s-diff-port-544623                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-544623 | jenkins | v1.35.0 | 10 May 25 19:26 UTC | 10 May 25 19:26 UTC |
	|         | default-k8s-diff-port-544623                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-544623 | jenkins | v1.35.0 | 10 May 25 19:26 UTC | 10 May 25 19:26 UTC |
	|         | default-k8s-diff-port-544623                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-089147                              | old-k8s-version-089147       | jenkins | v1.35.0 | 10 May 25 19:27 UTC | 10 May 25 19:27 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-089147             | old-k8s-version-089147       | jenkins | v1.35.0 | 10 May 25 19:27 UTC | 10 May 25 19:27 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-089147                              | old-k8s-version-089147       | jenkins | v1.35.0 | 10 May 25 19:27 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-483140                 | embed-certs-483140           | jenkins | v1.35.0 | 10 May 25 19:27 UTC | 10 May 25 19:27 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-483140                                  | embed-certs-483140           | jenkins | v1.35.0 | 10 May 25 19:27 UTC | 10 May 25 19:28 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.0                           |                              |         |         |                     |                     |
	| image   | embed-certs-483140 image list                          | embed-certs-483140           | jenkins | v1.35.0 | 10 May 25 19:28 UTC | 10 May 25 19:28 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-483140                                  | embed-certs-483140           | jenkins | v1.35.0 | 10 May 25 19:28 UTC | 10 May 25 19:28 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-483140                                  | embed-certs-483140           | jenkins | v1.35.0 | 10 May 25 19:28 UTC | 10 May 25 19:28 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-483140                                  | embed-certs-483140           | jenkins | v1.35.0 | 10 May 25 19:28 UTC | 10 May 25 19:28 UTC |
	| delete  | -p embed-certs-483140                                  | embed-certs-483140           | jenkins | v1.35.0 | 10 May 25 19:28 UTC | 10 May 25 19:28 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/05/10 19:27:23
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0510 19:27:23.885144  459268 out.go:345] Setting OutFile to fd 1 ...
	I0510 19:27:23.885480  459268 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 19:27:23.885497  459268 out.go:358] Setting ErrFile to fd 2...
	I0510 19:27:23.885501  459268 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 19:27:23.885719  459268 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-388787/.minikube/bin
	I0510 19:27:23.886293  459268 out.go:352] Setting JSON to false
	I0510 19:27:23.887364  459268 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":32992,"bootTime":1746872252,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0510 19:27:23.887483  459268 start.go:140] virtualization: kvm guest
	I0510 19:27:23.889943  459268 out.go:177] * [embed-certs-483140] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0510 19:27:23.891957  459268 notify.go:220] Checking for updates...
	I0510 19:27:23.891994  459268 out.go:177]   - MINIKUBE_LOCATION=20720
	I0510 19:27:23.894190  459268 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0510 19:27:23.896124  459268 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20720-388787/kubeconfig
	I0510 19:27:23.897923  459268 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-388787/.minikube
	I0510 19:27:23.899523  459268 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0510 19:27:23.901199  459268 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0510 19:27:23.903392  459268 config.go:182] Loaded profile config "embed-certs-483140": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 19:27:23.904060  459268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:27:23.904180  459268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:27:23.920190  459268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45251
	I0510 19:27:23.920695  459268 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:27:23.921217  459268 main.go:141] libmachine: Using API Version  1
	I0510 19:27:23.921240  459268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:27:23.921569  459268 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:27:23.921756  459268 main.go:141] libmachine: (embed-certs-483140) Calling .DriverName
	I0510 19:27:23.922029  459268 driver.go:404] Setting default libvirt URI to qemu:///system
	I0510 19:27:23.922349  459268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:27:23.922417  459268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:27:23.938240  459268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41081
	I0510 19:27:23.938810  459268 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:27:23.939433  459268 main.go:141] libmachine: Using API Version  1
	I0510 19:27:23.939468  459268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:27:23.939903  459268 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:27:23.940145  459268 main.go:141] libmachine: (embed-certs-483140) Calling .DriverName
	I0510 19:27:23.978372  459268 out.go:177] * Using the kvm2 driver based on existing profile
	I0510 19:27:20.282773  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:20.283336  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | unable to find current IP address of domain old-k8s-version-089147 in network mk-old-k8s-version-089147
	I0510 19:27:20.283406  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | I0510 19:27:20.283343  459091 retry.go:31] will retry after 3.189593727s: waiting for domain to come up
	I0510 19:27:23.618741  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:23.619115  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | unable to find current IP address of domain old-k8s-version-089147 in network mk-old-k8s-version-089147
	I0510 19:27:23.619143  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | I0510 19:27:23.619075  459091 retry.go:31] will retry after 3.237680008s: waiting for domain to come up
	I0510 19:27:23.979818  459268 start.go:304] selected driver: kvm2
	I0510 19:27:23.979843  459268 start.go:908] validating driver "kvm2" against &{Name:embed-certs-483140 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:embed-certs-483140 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.231 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 19:27:23.979977  459268 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0510 19:27:23.980756  459268 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0510 19:27:23.980839  459268 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20720-388787/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0510 19:27:23.997236  459268 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0510 19:27:23.997883  459268 start_flags.go:975] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0510 19:27:23.997935  459268 cni.go:84] Creating CNI manager for ""
	I0510 19:27:23.998008  459268 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0510 19:27:23.998078  459268 start.go:347] cluster config:
	{Name:embed-certs-483140 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:embed-certs-483140 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.231 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 19:27:23.998238  459268 iso.go:125] acquiring lock: {Name:mk19640015999219180c6685480547adf0c02201 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0510 19:27:24.000161  459268 out.go:177] * Starting "embed-certs-483140" primary control-plane node in "embed-certs-483140" cluster
	I0510 19:27:24.001573  459268 preload.go:131] Checking if preload exists for k8s version v1.33.0 and runtime crio
	I0510 19:27:24.001646  459268 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-cri-o-overlay-amd64.tar.lz4
	I0510 19:27:24.001656  459268 cache.go:56] Caching tarball of preloaded images
	I0510 19:27:24.001770  459268 preload.go:172] Found /home/jenkins/minikube-integration/20720-388787/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0510 19:27:24.001787  459268 cache.go:59] Finished verifying existence of preloaded tar for v1.33.0 on crio
	I0510 19:27:24.001913  459268 profile.go:143] Saving config to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/embed-certs-483140/config.json ...
	I0510 19:27:24.002132  459268 start.go:360] acquireMachinesLock for embed-certs-483140: {Name:mk11499d7756d503a7a24339ad1a7f9ab9dc0fab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0510 19:27:28.400997  459268 start.go:364] duration metric: took 4.398817522s to acquireMachinesLock for "embed-certs-483140"
	I0510 19:27:28.401047  459268 start.go:96] Skipping create...Using existing machine configuration
	I0510 19:27:28.401054  459268 fix.go:54] fixHost starting: 
	I0510 19:27:28.401464  459268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:27:28.401519  459268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:27:28.419712  459268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44069
	I0510 19:27:28.420231  459268 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:27:28.420865  459268 main.go:141] libmachine: Using API Version  1
	I0510 19:27:28.420897  459268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:27:28.421274  459268 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:27:28.421549  459268 main.go:141] libmachine: (embed-certs-483140) Calling .DriverName
	I0510 19:27:28.421748  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetState
	I0510 19:27:28.423533  459268 fix.go:112] recreateIfNeeded on embed-certs-483140: state=Stopped err=<nil>
	I0510 19:27:28.423563  459268 main.go:141] libmachine: (embed-certs-483140) Calling .DriverName
	W0510 19:27:28.423744  459268 fix.go:138] unexpected machine state, will restart: <nil>
	I0510 19:27:28.425472  459268 out.go:177] * Restarting existing kvm2 VM for "embed-certs-483140" ...
	I0510 19:27:28.426613  459268 main.go:141] libmachine: (embed-certs-483140) Calling .Start
	I0510 19:27:28.426810  459268 main.go:141] libmachine: (embed-certs-483140) starting domain...
	I0510 19:27:28.426829  459268 main.go:141] libmachine: (embed-certs-483140) ensuring networks are active...
	I0510 19:27:28.427619  459268 main.go:141] libmachine: (embed-certs-483140) Ensuring network default is active
	I0510 19:27:28.428029  459268 main.go:141] libmachine: (embed-certs-483140) Ensuring network mk-embed-certs-483140 is active
	I0510 19:27:28.428436  459268 main.go:141] libmachine: (embed-certs-483140) getting domain XML...
	I0510 19:27:28.429330  459268 main.go:141] libmachine: (embed-certs-483140) creating domain...
	I0510 19:27:26.860579  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:26.861169  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has current primary IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:26.861235  459056 main.go:141] libmachine: (old-k8s-version-089147) found domain IP: 192.168.50.225
	I0510 19:27:26.861263  459056 main.go:141] libmachine: (old-k8s-version-089147) reserving static IP address...
	I0510 19:27:26.861678  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "old-k8s-version-089147", mac: "52:54:00:c5:c6:86", ip: "192.168.50.225"} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:26.861748  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | skip adding static IP to network mk-old-k8s-version-089147 - found existing host DHCP lease matching {name: "old-k8s-version-089147", mac: "52:54:00:c5:c6:86", ip: "192.168.50.225"}
	I0510 19:27:26.861769  459056 main.go:141] libmachine: (old-k8s-version-089147) reserved static IP address 192.168.50.225 for domain old-k8s-version-089147
	I0510 19:27:26.861785  459056 main.go:141] libmachine: (old-k8s-version-089147) waiting for SSH...
	I0510 19:27:26.861791  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | Getting to WaitForSSH function...
	I0510 19:27:26.863716  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:26.864074  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:26.864105  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:26.864224  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | Using SSH client type: external
	I0510 19:27:26.864249  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | Using SSH private key: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/old-k8s-version-089147/id_rsa (-rw-------)
	I0510 19:27:26.864275  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.225 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20720-388787/.minikube/machines/old-k8s-version-089147/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0510 19:27:26.864284  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | About to run SSH command:
	I0510 19:27:26.864292  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | exit 0
	I0510 19:27:26.992149  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | SSH cmd err, output: <nil>: 
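	# Illustrative sketch (not part of the log): the WaitForSSH probe above amounts to retrying
	# a no-op command over SSH until the guest answers. Host, key path and options are taken from
	# the log; the retry loop itself is an assumption added for illustration.
	KEY=/home/jenkins/minikube-integration/20720-388787/.minikube/machines/old-k8s-version-089147/id_rsa
	until ssh -F /dev/null -o ConnectTimeout=10 -o StrictHostKeyChecking=no \
	      -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -i "$KEY" -p 22 \
	      docker@192.168.50.225 'exit 0'; do
	  sleep 2   # keep probing until sshd inside the guest is reachable
	done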
	I0510 19:27:26.992596  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetConfigRaw
	I0510 19:27:26.993291  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetIP
	I0510 19:27:26.996245  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:26.996734  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:26.996760  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:26.996987  459056 profile.go:143] Saving config to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/config.json ...
	I0510 19:27:26.997231  459056 machine.go:93] provisionDockerMachine start ...
	I0510 19:27:26.997257  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .DriverName
	I0510 19:27:26.997484  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHHostname
	I0510 19:27:26.999968  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.000439  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:27.000476  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.000707  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHPort
	I0510 19:27:27.000924  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:27.001051  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:27.001195  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHUsername
	I0510 19:27:27.001309  459056 main.go:141] libmachine: Using SSH client type: native
	I0510 19:27:27.001588  459056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.50.225 22 <nil> <nil>}
	I0510 19:27:27.001603  459056 main.go:141] libmachine: About to run SSH command:
	hostname
	I0510 19:27:27.120348  459056 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0510 19:27:27.120385  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetMachineName
	I0510 19:27:27.120685  459056 buildroot.go:166] provisioning hostname "old-k8s-version-089147"
	I0510 19:27:27.120712  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetMachineName
	I0510 19:27:27.120937  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHHostname
	I0510 19:27:27.123906  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.124166  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:27.124192  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.124346  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHPort
	I0510 19:27:27.124515  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:27.124641  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:27.124770  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHUsername
	I0510 19:27:27.124903  459056 main.go:141] libmachine: Using SSH client type: native
	I0510 19:27:27.125130  459056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.50.225 22 <nil> <nil>}
	I0510 19:27:27.125146  459056 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-089147 && echo "old-k8s-version-089147" | sudo tee /etc/hostname
	I0510 19:27:27.254277  459056 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-089147
	
	I0510 19:27:27.254306  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHHostname
	I0510 19:27:27.257358  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.257763  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:27.257793  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.258010  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHPort
	I0510 19:27:27.258221  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:27.258392  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:27.258550  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHUsername
	I0510 19:27:27.258746  459056 main.go:141] libmachine: Using SSH client type: native
	I0510 19:27:27.258987  459056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.50.225 22 <nil> <nil>}
	I0510 19:27:27.259004  459056 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-089147' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-089147/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-089147' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0510 19:27:27.383141  459056 main.go:141] libmachine: SSH cmd err, output: <nil>: 
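	# Illustrative sketch (not part of the log): the hostname provisioning above as a standalone
	# script run inside the guest; the commands mirror the SSH payloads shown in the log.
	NAME=old-k8s-version-089147
	sudo hostname "$NAME" && echo "$NAME" | sudo tee /etc/hostname
	if ! grep -xq ".*\s$NAME" /etc/hosts; then
	  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	    sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 $NAME/g" /etc/hosts
	  else
	    echo "127.0.1.1 $NAME" | sudo tee -a /etc/hosts
	  fi
	fi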
	I0510 19:27:27.383177  459056 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20720-388787/.minikube CaCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20720-388787/.minikube}
	I0510 19:27:27.383245  459056 buildroot.go:174] setting up certificates
	I0510 19:27:27.383268  459056 provision.go:84] configureAuth start
	I0510 19:27:27.383282  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetMachineName
	I0510 19:27:27.383632  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetIP
	I0510 19:27:27.386412  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.386733  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:27.386760  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.386920  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHHostname
	I0510 19:27:27.388990  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.389308  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:27.389346  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.389489  459056 provision.go:143] copyHostCerts
	I0510 19:27:27.389586  459056 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem, removing ...
	I0510 19:27:27.389611  459056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem
	I0510 19:27:27.389674  459056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem (1675 bytes)
	I0510 19:27:27.389763  459056 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem, removing ...
	I0510 19:27:27.389771  459056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem
	I0510 19:27:27.389797  459056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem (1078 bytes)
	I0510 19:27:27.389845  459056 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem, removing ...
	I0510 19:27:27.389852  459056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem
	I0510 19:27:27.389873  459056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem (1123 bytes)
	I0510 19:27:27.389917  459056 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-089147 san=[127.0.0.1 192.168.50.225 localhost minikube old-k8s-version-089147]
	I0510 19:27:27.706220  459056 provision.go:177] copyRemoteCerts
	I0510 19:27:27.706291  459056 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0510 19:27:27.706321  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHHostname
	I0510 19:27:27.709279  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.709662  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:27.709704  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.709901  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHPort
	I0510 19:27:27.710147  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:27.710312  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHUsername
	I0510 19:27:27.710453  459056 sshutil.go:53] new ssh client: &{IP:192.168.50.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/old-k8s-version-089147/id_rsa Username:docker}
	I0510 19:27:27.796192  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0510 19:27:27.826223  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0510 19:27:27.856165  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0510 19:27:27.885803  459056 provision.go:87] duration metric: took 502.517549ms to configureAuth
	I0510 19:27:27.885844  459056 buildroot.go:189] setting minikube options for container-runtime
	I0510 19:27:27.886049  459056 config.go:182] Loaded profile config "old-k8s-version-089147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0510 19:27:27.886126  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHHostname
	I0510 19:27:27.888892  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.889274  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:27.889304  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.889432  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHPort
	I0510 19:27:27.889662  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:27.889842  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:27.890001  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHUsername
	I0510 19:27:27.890137  459056 main.go:141] libmachine: Using SSH client type: native
	I0510 19:27:27.890398  459056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.50.225 22 <nil> <nil>}
	I0510 19:27:27.890414  459056 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0510 19:27:28.145754  459056 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0510 19:27:28.145780  459056 machine.go:96] duration metric: took 1.148533327s to provisionDockerMachine
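	# Illustrative sketch (not part of the log): the container-runtime option written above,
	# expressed as the equivalent one-off commands on the guest.
	sudo mkdir -p /etc/sysconfig
	printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" \
	  | sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio   # restart so CRI-O picks up the new drop-in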
	I0510 19:27:28.145793  459056 start.go:293] postStartSetup for "old-k8s-version-089147" (driver="kvm2")
	I0510 19:27:28.145805  459056 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0510 19:27:28.145843  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .DriverName
	I0510 19:27:28.146213  459056 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0510 19:27:28.146241  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHHostname
	I0510 19:27:28.148935  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:28.149310  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:28.149338  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:28.149442  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHPort
	I0510 19:27:28.149630  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:28.149794  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHUsername
	I0510 19:27:28.149969  459056 sshutil.go:53] new ssh client: &{IP:192.168.50.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/old-k8s-version-089147/id_rsa Username:docker}
	I0510 19:27:28.237429  459056 ssh_runner.go:195] Run: cat /etc/os-release
	I0510 19:27:28.242504  459056 info.go:137] Remote host: Buildroot 2024.11.2
	I0510 19:27:28.242535  459056 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-388787/.minikube/addons for local assets ...
	I0510 19:27:28.242600  459056 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-388787/.minikube/files for local assets ...
	I0510 19:27:28.242694  459056 filesync.go:149] local asset: /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem -> 3959802.pem in /etc/ssl/certs
	I0510 19:27:28.242795  459056 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0510 19:27:28.255581  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem --> /etc/ssl/certs/3959802.pem (1708 bytes)
	I0510 19:27:28.285383  459056 start.go:296] duration metric: took 139.572888ms for postStartSetup
	I0510 19:27:28.285430  459056 fix.go:56] duration metric: took 19.171545731s for fixHost
	I0510 19:27:28.285452  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHHostname
	I0510 19:27:28.288861  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:28.289256  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:28.289288  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:28.289472  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHPort
	I0510 19:27:28.289747  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:28.289968  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:28.290122  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHUsername
	I0510 19:27:28.290275  459056 main.go:141] libmachine: Using SSH client type: native
	I0510 19:27:28.290504  459056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.50.225 22 <nil> <nil>}
	I0510 19:27:28.290514  459056 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0510 19:27:28.400790  459056 main.go:141] libmachine: SSH cmd err, output: <nil>: 1746905248.354737003
	
	I0510 19:27:28.400820  459056 fix.go:216] guest clock: 1746905248.354737003
	I0510 19:27:28.400830  459056 fix.go:229] Guest: 2025-05-10 19:27:28.354737003 +0000 UTC Remote: 2025-05-10 19:27:28.285433906 +0000 UTC m=+19.332417949 (delta=69.303097ms)
	I0510 19:27:28.400874  459056 fix.go:200] guest clock delta is within tolerance: 69.303097ms
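	# Illustrative sketch (not part of the log): the clock-skew check above reads the guest's
	# wall clock and compares it against the host's, accepting a small delta (about 69ms in
	# this run). Host and key values are reused from the log; the comparison is an assumption
	# added for illustration.
	KEY=/home/jenkins/minikube-integration/20720-388787/.minikube/machines/old-k8s-version-089147/id_rsa
	guest=$(ssh -i "$KEY" docker@192.168.50.225 'date +%s.%N')
	host=$(date +%s.%N)
	echo "guest/host clock delta: $(echo "$host - $guest" | bc -l)s"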
	I0510 19:27:28.400901  459056 start.go:83] releasing machines lock for "old-k8s-version-089147", held for 19.287012994s
	I0510 19:27:28.400943  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .DriverName
	I0510 19:27:28.401246  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetIP
	I0510 19:27:28.404469  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:28.404985  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:28.405012  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:28.405227  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .DriverName
	I0510 19:27:28.405870  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .DriverName
	I0510 19:27:28.406067  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .DriverName
	I0510 19:27:28.406182  459056 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0510 19:27:28.406225  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHHostname
	I0510 19:27:28.406371  459056 ssh_runner.go:195] Run: cat /version.json
	I0510 19:27:28.406414  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHHostname
	I0510 19:27:28.409133  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:28.409451  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:28.409485  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:28.409508  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:28.409700  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHPort
	I0510 19:27:28.409895  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:28.409939  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:28.409971  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:28.410074  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHUsername
	I0510 19:27:28.410144  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHPort
	I0510 19:27:28.410238  459056 sshutil.go:53] new ssh client: &{IP:192.168.50.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/old-k8s-version-089147/id_rsa Username:docker}
	I0510 19:27:28.410313  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:28.410431  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHUsername
	I0510 19:27:28.410556  459056 sshutil.go:53] new ssh client: &{IP:192.168.50.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/old-k8s-version-089147/id_rsa Username:docker}
	I0510 19:27:28.522881  459056 ssh_runner.go:195] Run: systemctl --version
	I0510 19:27:28.529679  459056 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0510 19:27:28.679208  459056 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0510 19:27:28.686449  459056 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0510 19:27:28.686542  459056 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0510 19:27:28.706391  459056 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0510 19:27:28.706422  459056 start.go:495] detecting cgroup driver to use...
	I0510 19:27:28.706502  459056 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0510 19:27:28.725500  459056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0510 19:27:28.743141  459056 docker.go:225] disabling cri-docker service (if available) ...
	I0510 19:27:28.743218  459056 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0510 19:27:28.763489  459056 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0510 19:27:28.782362  459056 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0510 19:27:28.930849  459056 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0510 19:27:29.145684  459056 docker.go:241] disabling docker service ...
	I0510 19:27:29.145777  459056 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0510 19:27:29.162572  459056 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0510 19:27:29.177892  459056 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0510 19:27:29.337238  459056 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0510 19:27:29.498230  459056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0510 19:27:29.515221  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0510 19:27:29.539326  459056 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0510 19:27:29.539400  459056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:27:29.551931  459056 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0510 19:27:29.552027  459056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:27:29.563727  459056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:27:29.576495  459056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:27:29.589274  459056 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0510 19:27:29.602567  459056 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0510 19:27:29.613569  459056 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0510 19:27:29.613666  459056 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0510 19:27:29.631475  459056 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0510 19:27:29.646992  459056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 19:27:29.783415  459056 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0510 19:27:29.908799  459056 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0510 19:27:29.908871  459056 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0510 19:27:29.916611  459056 start.go:563] Will wait 60s for crictl version
	I0510 19:27:29.916678  459056 ssh_runner.go:195] Run: which crictl
	I0510 19:27:29.922342  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0510 19:27:29.970957  459056 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0510 19:27:29.971075  459056 ssh_runner.go:195] Run: crio --version
	I0510 19:27:30.013260  459056 ssh_runner.go:195] Run: crio --version
	I0510 19:27:30.045551  459056 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
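	# Illustrative sketch (not part of the log): the CRI-O preparation steps above (crictl
	# endpoint, pause image, cgroupfs driver, bridge netfilter, restart) as plain commands
	# run on the guest; they mirror the ssh_runner invocations in the log.
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	sudo modprobe br_netfilter                      # the netfilter sysctl key was missing above
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	sudo systemctl daemon-reload && sudo systemctl restart crio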
	I0510 19:27:29.772968  459268 main.go:141] libmachine: (embed-certs-483140) waiting for IP...
	I0510 19:27:29.773852  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:29.774282  459268 main.go:141] libmachine: (embed-certs-483140) DBG | unable to find current IP address of domain embed-certs-483140 in network mk-embed-certs-483140
	I0510 19:27:29.774439  459268 main.go:141] libmachine: (embed-certs-483140) DBG | I0510 19:27:29.774308  459321 retry.go:31] will retry after 290.306519ms: waiting for domain to come up
	I0510 19:27:30.066100  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:30.066611  459268 main.go:141] libmachine: (embed-certs-483140) DBG | unable to find current IP address of domain embed-certs-483140 in network mk-embed-certs-483140
	I0510 19:27:30.066646  459268 main.go:141] libmachine: (embed-certs-483140) DBG | I0510 19:27:30.066565  459321 retry.go:31] will retry after 275.607152ms: waiting for domain to come up
	I0510 19:27:30.344347  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:30.345208  459268 main.go:141] libmachine: (embed-certs-483140) DBG | unable to find current IP address of domain embed-certs-483140 in network mk-embed-certs-483140
	I0510 19:27:30.345242  459268 main.go:141] libmachine: (embed-certs-483140) DBG | I0510 19:27:30.345116  459321 retry.go:31] will retry after 431.583413ms: waiting for domain to come up
	I0510 19:27:30.779076  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:30.779843  459268 main.go:141] libmachine: (embed-certs-483140) DBG | unable to find current IP address of domain embed-certs-483140 in network mk-embed-certs-483140
	I0510 19:27:30.779882  459268 main.go:141] libmachine: (embed-certs-483140) DBG | I0510 19:27:30.779780  459321 retry.go:31] will retry after 472.118095ms: waiting for domain to come up
	I0510 19:27:31.253280  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:31.253935  459268 main.go:141] libmachine: (embed-certs-483140) DBG | unable to find current IP address of domain embed-certs-483140 in network mk-embed-certs-483140
	I0510 19:27:31.253963  459268 main.go:141] libmachine: (embed-certs-483140) DBG | I0510 19:27:31.253906  459321 retry.go:31] will retry after 565.053718ms: waiting for domain to come up
	I0510 19:27:31.820497  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:31.821065  459268 main.go:141] libmachine: (embed-certs-483140) DBG | unable to find current IP address of domain embed-certs-483140 in network mk-embed-certs-483140
	I0510 19:27:31.821097  459268 main.go:141] libmachine: (embed-certs-483140) DBG | I0510 19:27:31.821039  459321 retry.go:31] will retry after 714.111732ms: waiting for domain to come up
	I0510 19:27:32.536460  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:32.537050  459268 main.go:141] libmachine: (embed-certs-483140) DBG | unable to find current IP address of domain embed-certs-483140 in network mk-embed-certs-483140
	I0510 19:27:32.537080  459268 main.go:141] libmachine: (embed-certs-483140) DBG | I0510 19:27:32.537000  459321 retry.go:31] will retry after 1.161843323s: waiting for domain to come up
	I0510 19:27:33.701019  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:33.701583  459268 main.go:141] libmachine: (embed-certs-483140) DBG | unable to find current IP address of domain embed-certs-483140 in network mk-embed-certs-483140
	I0510 19:27:33.701613  459268 main.go:141] libmachine: (embed-certs-483140) DBG | I0510 19:27:33.701550  459321 retry.go:31] will retry after 996.121621ms: waiting for domain to come up
	I0510 19:27:30.046696  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetIP
	I0510 19:27:30.049916  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:30.050298  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:30.050343  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:30.050593  459056 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0510 19:27:30.055795  459056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0510 19:27:30.072862  459056 kubeadm.go:875] updating cluster {Name:old-k8s-version-089147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-089147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.225 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0510 19:27:30.073023  459056 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0510 19:27:30.073092  459056 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 19:27:30.136655  459056 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0510 19:27:30.136733  459056 ssh_runner.go:195] Run: which lz4
	I0510 19:27:30.141756  459056 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0510 19:27:30.146784  459056 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0510 19:27:30.146832  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0510 19:27:32.084982  459056 crio.go:462] duration metric: took 1.943253158s to copy over tarball
	I0510 19:27:32.085084  459056 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0510 19:27:34.700012  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:34.700655  459268 main.go:141] libmachine: (embed-certs-483140) DBG | unable to find current IP address of domain embed-certs-483140 in network mk-embed-certs-483140
	I0510 19:27:34.700709  459268 main.go:141] libmachine: (embed-certs-483140) DBG | I0510 19:27:34.700617  459321 retry.go:31] will retry after 1.33170267s: waiting for domain to come up
	I0510 19:27:36.033761  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:36.034412  459268 main.go:141] libmachine: (embed-certs-483140) DBG | unable to find current IP address of domain embed-certs-483140 in network mk-embed-certs-483140
	I0510 19:27:36.034447  459268 main.go:141] libmachine: (embed-certs-483140) DBG | I0510 19:27:36.034366  459321 retry.go:31] will retry after 2.129430607s: waiting for domain to come up
	I0510 19:27:38.166385  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:38.167048  459268 main.go:141] libmachine: (embed-certs-483140) DBG | unable to find current IP address of domain embed-certs-483140 in network mk-embed-certs-483140
	I0510 19:27:38.167074  459268 main.go:141] libmachine: (embed-certs-483140) DBG | I0510 19:27:38.167010  459321 retry.go:31] will retry after 1.898585133s: waiting for domain to come up
	I0510 19:27:34.680248  459056 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.595132142s)
	I0510 19:27:34.680275  459056 crio.go:469] duration metric: took 2.595258666s to extract the tarball
	I0510 19:27:34.680284  459056 ssh_runner.go:146] rm: /preloaded.tar.lz4
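	# Illustrative sketch (not part of the log): the preload path above - check for the tarball
	# on the guest, copy it over if missing, unpack into /var, then remove it. Paths and the key
	# are reused from the log; the real runner performs the copy with root privileges, so the
	# direct scp target here is a simplification.
	TARBALL=/home/jenkins/minikube-integration/20720-388787/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	KEY=/home/jenkins/minikube-integration/20720-388787/.minikube/machines/old-k8s-version-089147/id_rsa
	ssh -i "$KEY" docker@192.168.50.225 'stat -c "%s %y" /preloaded.tar.lz4' || \
	  scp -i "$KEY" "$TARBALL" docker@192.168.50.225:/preloaded.tar.lz4
	ssh -i "$KEY" docker@192.168.50.225 \
	  'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm /preloaded.tar.lz4'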
	I0510 19:27:34.725856  459056 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 19:27:34.769530  459056 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0510 19:27:34.769567  459056 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0510 19:27:34.769639  459056 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0510 19:27:34.769682  459056 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0510 19:27:34.769696  459056 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0510 19:27:34.769712  459056 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0510 19:27:34.769686  459056 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0510 19:27:34.769766  459056 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0510 19:27:34.769779  459056 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0510 19:27:34.769798  459056 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0510 19:27:34.771393  459056 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0510 19:27:34.771413  459056 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0510 19:27:34.771433  459056 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0510 19:27:34.771391  459056 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0510 19:27:34.771454  459056 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0510 19:27:34.771457  459056 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0510 19:27:34.771488  459056 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0510 19:27:34.771522  459056 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0510 19:27:34.903898  459056 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0510 19:27:34.909532  459056 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0510 19:27:34.909958  459056 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0510 19:27:34.920714  459056 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0510 19:27:34.927038  459056 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0510 19:27:34.932543  459056 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0510 19:27:34.939391  459056 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0510 19:27:35.035164  459056 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0510 19:27:35.035225  459056 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0510 19:27:35.035308  459056 ssh_runner.go:195] Run: which crictl
	I0510 19:27:35.046705  459056 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0510 19:27:35.046773  459056 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0510 19:27:35.046831  459056 ssh_runner.go:195] Run: which crictl
	I0510 19:27:35.102600  459056 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0510 19:27:35.102657  459056 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0510 19:27:35.102728  459056 ssh_runner.go:195] Run: which crictl
	I0510 19:27:35.114127  459056 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0510 19:27:35.114197  459056 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0510 19:27:35.114220  459056 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0510 19:27:35.114255  459056 ssh_runner.go:195] Run: which crictl
	I0510 19:27:35.114262  459056 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0510 19:27:35.114305  459056 ssh_runner.go:195] Run: which crictl
	I0510 19:27:35.114526  459056 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0510 19:27:35.114562  459056 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0510 19:27:35.114596  459056 ssh_runner.go:195] Run: which crictl
	I0510 19:27:35.135454  459056 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0510 19:27:35.135500  459056 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0510 19:27:35.135549  459056 ssh_runner.go:195] Run: which crictl
	I0510 19:27:35.135570  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0510 19:27:35.135627  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0510 19:27:35.135673  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0510 19:27:35.135728  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0510 19:27:35.135753  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0510 19:27:35.135782  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0510 19:27:35.246929  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0510 19:27:35.246999  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0510 19:27:35.304129  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0510 19:27:35.304183  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0510 19:27:35.304193  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0510 19:27:35.304231  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0510 19:27:35.304278  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0510 19:27:35.381894  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0510 19:27:35.381939  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0510 19:27:35.482712  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0510 19:27:35.482788  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0510 19:27:35.482823  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0510 19:27:35.482858  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0510 19:27:35.482947  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0510 19:27:35.526146  459056 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0510 19:27:35.557215  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0510 19:27:35.649079  459056 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0510 19:27:35.649160  459056 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0510 19:27:35.649222  459056 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0510 19:27:35.649256  459056 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0510 19:27:35.649351  459056 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0510 19:27:35.667931  459056 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0510 19:27:35.671336  459056 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0510 19:27:35.818843  459056 cache_images.go:92] duration metric: took 1.049254698s to LoadCachedImages
	W0510 19:27:35.818925  459056 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0510 19:27:35.818936  459056 kubeadm.go:926] updating node { 192.168.50.225 8443 v1.20.0 crio true true} ...
	I0510 19:27:35.819071  459056 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-089147 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.225
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-089147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0510 19:27:35.819178  459056 ssh_runner.go:195] Run: crio config
	I0510 19:27:35.871053  459056 cni.go:84] Creating CNI manager for ""
	I0510 19:27:35.871078  459056 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0510 19:27:35.871088  459056 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0510 19:27:35.871108  459056 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.225 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-089147 NodeName:old-k8s-version-089147 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.225"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.225 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0510 19:27:35.871325  459056 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.225
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-089147"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.225
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.225"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0510 19:27:35.871410  459056 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0510 19:27:35.884778  459056 binaries.go:44] Found k8s binaries, skipping transfer
	I0510 19:27:35.884850  459056 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0510 19:27:35.897755  459056 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0510 19:27:35.920392  459056 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0510 19:27:35.944066  459056 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0510 19:27:35.969513  459056 ssh_runner.go:195] Run: grep 192.168.50.225	control-plane.minikube.internal$ /etc/hosts
	I0510 19:27:35.973968  459056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.225	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0510 19:27:35.989113  459056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 19:27:36.126144  459056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0510 19:27:36.161368  459056 certs.go:68] Setting up /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147 for IP: 192.168.50.225
	I0510 19:27:36.161393  459056 certs.go:194] generating shared ca certs ...
	I0510 19:27:36.161414  459056 certs.go:226] acquiring lock for ca certs: {Name:mk8db74782205da4ac57ef815dd495cda255251a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 19:27:36.161602  459056 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20720-388787/.minikube/ca.key
	I0510 19:27:36.161660  459056 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.key
	I0510 19:27:36.161675  459056 certs.go:256] generating profile certs ...
	I0510 19:27:36.161815  459056 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/client.key
	I0510 19:27:36.161897  459056 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/apiserver.key.3362ca92
	I0510 19:27:36.161951  459056 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/proxy-client.key
	I0510 19:27:36.162093  459056 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/395980.pem (1338 bytes)
	W0510 19:27:36.162134  459056 certs.go:480] ignoring /home/jenkins/minikube-integration/20720-388787/.minikube/certs/395980_empty.pem, impossibly tiny 0 bytes
	I0510 19:27:36.162148  459056 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem (1679 bytes)
	I0510 19:27:36.162186  459056 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem (1078 bytes)
	I0510 19:27:36.162219  459056 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem (1123 bytes)
	I0510 19:27:36.162251  459056 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem (1675 bytes)
	I0510 19:27:36.162305  459056 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem (1708 bytes)
	I0510 19:27:36.163029  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0510 19:27:36.207434  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0510 19:27:36.254337  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0510 19:27:36.302029  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0510 19:27:36.340123  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0510 19:27:36.372457  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0510 19:27:36.417695  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0510 19:27:36.454687  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0510 19:27:36.491453  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0510 19:27:36.527708  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/certs/395980.pem --> /usr/share/ca-certificates/395980.pem (1338 bytes)
	I0510 19:27:36.566188  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem --> /usr/share/ca-certificates/3959802.pem (1708 bytes)
	I0510 19:27:36.605695  459056 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0510 19:27:36.633416  459056 ssh_runner.go:195] Run: openssl version
	I0510 19:27:36.640812  459056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0510 19:27:36.655287  459056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0510 19:27:36.660996  459056 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 10 17:52 /usr/share/ca-certificates/minikubeCA.pem
	I0510 19:27:36.661078  459056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0510 19:27:36.671509  459056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0510 19:27:36.685341  459056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/395980.pem && ln -fs /usr/share/ca-certificates/395980.pem /etc/ssl/certs/395980.pem"
	I0510 19:27:36.701195  459056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/395980.pem
	I0510 19:27:36.707338  459056 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 10 18:00 /usr/share/ca-certificates/395980.pem
	I0510 19:27:36.707426  459056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/395980.pem
	I0510 19:27:36.715832  459056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/395980.pem /etc/ssl/certs/51391683.0"
	I0510 19:27:36.730499  459056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3959802.pem && ln -fs /usr/share/ca-certificates/3959802.pem /etc/ssl/certs/3959802.pem"
	I0510 19:27:36.745937  459056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3959802.pem
	I0510 19:27:36.753124  459056 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 10 18:00 /usr/share/ca-certificates/3959802.pem
	I0510 19:27:36.753219  459056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3959802.pem
	I0510 19:27:36.763162  459056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3959802.pem /etc/ssl/certs/3ec20f2e.0"
	I0510 19:27:36.777980  459056 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0510 19:27:36.784377  459056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0510 19:27:36.792871  459056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0510 19:27:36.801028  459056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0510 19:27:36.809570  459056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0510 19:27:36.820430  459056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0510 19:27:36.830234  459056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0510 19:27:36.838492  459056 kubeadm.go:392] StartCluster: {Name:old-k8s-version-089147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-089147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.225 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 19:27:36.838628  459056 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0510 19:27:36.838710  459056 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0510 19:27:36.883637  459056 cri.go:89] found id: ""
	I0510 19:27:36.883721  459056 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0510 19:27:36.898381  459056 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0510 19:27:36.898418  459056 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0510 19:27:36.898479  459056 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0510 19:27:36.911968  459056 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0510 19:27:36.912423  459056 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-089147" does not appear in /home/jenkins/minikube-integration/20720-388787/kubeconfig
	I0510 19:27:36.912622  459056 kubeconfig.go:62] /home/jenkins/minikube-integration/20720-388787/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-089147" cluster setting kubeconfig missing "old-k8s-version-089147" context setting]
	I0510 19:27:36.912933  459056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-388787/kubeconfig: {Name:mk5ad7285fe4c17b2779ea6d5a539f101fe94797 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 19:27:36.978461  459056 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0510 19:27:36.992010  459056 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.50.225
	I0510 19:27:36.992058  459056 kubeadm.go:1152] stopping kube-system containers ...
	I0510 19:27:36.992090  459056 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0510 19:27:36.992157  459056 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0510 19:27:37.036332  459056 cri.go:89] found id: ""
	I0510 19:27:37.036417  459056 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0510 19:27:37.061304  459056 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0510 19:27:37.077360  459056 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0510 19:27:37.077388  459056 kubeadm.go:157] found existing configuration files:
	
	I0510 19:27:37.077447  459056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0510 19:27:37.091136  459056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0510 19:27:37.091207  459056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0510 19:27:37.108190  459056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0510 19:27:37.122863  459056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0510 19:27:37.122925  459056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0510 19:27:37.135581  459056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0510 19:27:37.151096  459056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0510 19:27:37.151176  459056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0510 19:27:37.163976  459056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0510 19:27:37.176297  459056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0510 19:27:37.176382  459056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0510 19:27:37.189484  459056 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0510 19:27:37.202907  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0510 19:27:37.370636  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0510 19:27:38.101468  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0510 19:27:38.357025  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0510 19:27:38.472109  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0510 19:27:38.566036  459056 api_server.go:52] waiting for apiserver process to appear ...
	I0510 19:27:38.566163  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:40.067566  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:40.068079  459268 main.go:141] libmachine: (embed-certs-483140) DBG | unable to find current IP address of domain embed-certs-483140 in network mk-embed-certs-483140
	I0510 19:27:40.068151  459268 main.go:141] libmachine: (embed-certs-483140) DBG | I0510 19:27:40.068067  459321 retry.go:31] will retry after 3.236923309s: waiting for domain to come up
	I0510 19:27:43.308549  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:43.309080  459268 main.go:141] libmachine: (embed-certs-483140) DBG | unable to find current IP address of domain embed-certs-483140 in network mk-embed-certs-483140
	I0510 19:27:43.309112  459268 main.go:141] libmachine: (embed-certs-483140) DBG | I0510 19:27:43.309038  459321 retry.go:31] will retry after 2.981327362s: waiting for domain to come up
	I0510 19:27:39.066944  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:39.566854  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:40.067066  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:40.567198  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:41.066452  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:41.566381  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:42.066951  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:42.567170  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:43.067308  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:43.566541  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:46.293587  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:46.294125  459268 main.go:141] libmachine: (embed-certs-483140) DBG | unable to find current IP address of domain embed-certs-483140 in network mk-embed-certs-483140
	I0510 19:27:46.294169  459268 main.go:141] libmachine: (embed-certs-483140) DBG | I0510 19:27:46.294106  459321 retry.go:31] will retry after 3.49595936s: waiting for domain to come up
	I0510 19:27:44.067005  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:44.566869  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:45.066432  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:45.567107  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:46.066205  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:46.566600  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:47.066806  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:47.567316  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:48.067123  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:48.566636  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:49.792274  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:49.792796  459268 main.go:141] libmachine: (embed-certs-483140) found domain IP: 192.168.72.231
	I0510 19:27:49.792820  459268 main.go:141] libmachine: (embed-certs-483140) reserving static IP address...
	I0510 19:27:49.792830  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has current primary IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:49.793260  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "embed-certs-483140", mac: "52:54:00:2c:f8:9f", ip: "192.168.72.231"} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:27:49.793283  459268 main.go:141] libmachine: (embed-certs-483140) reserved static IP address 192.168.72.231 for domain embed-certs-483140
	I0510 19:27:49.793301  459268 main.go:141] libmachine: (embed-certs-483140) DBG | skip adding static IP to network mk-embed-certs-483140 - found existing host DHCP lease matching {name: "embed-certs-483140", mac: "52:54:00:2c:f8:9f", ip: "192.168.72.231"}
	I0510 19:27:49.793315  459268 main.go:141] libmachine: (embed-certs-483140) DBG | Getting to WaitForSSH function...
	I0510 19:27:49.793330  459268 main.go:141] libmachine: (embed-certs-483140) waiting for SSH...
	I0510 19:27:49.795680  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:49.796092  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:27:49.796115  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:49.796237  459268 main.go:141] libmachine: (embed-certs-483140) DBG | Using SSH client type: external
	I0510 19:27:49.796292  459268 main.go:141] libmachine: (embed-certs-483140) DBG | Using SSH private key: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/embed-certs-483140/id_rsa (-rw-------)
	I0510 19:27:49.796323  459268 main.go:141] libmachine: (embed-certs-483140) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.231 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20720-388787/.minikube/machines/embed-certs-483140/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0510 19:27:49.796357  459268 main.go:141] libmachine: (embed-certs-483140) DBG | About to run SSH command:
	I0510 19:27:49.796369  459268 main.go:141] libmachine: (embed-certs-483140) DBG | exit 0
	I0510 19:27:49.923834  459268 main.go:141] libmachine: (embed-certs-483140) DBG | SSH cmd err, output: <nil>: 
	I0510 19:27:49.924265  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetConfigRaw
	I0510 19:27:49.924904  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetIP
	I0510 19:27:49.928115  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:49.928557  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:27:49.928589  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:49.928844  459268 profile.go:143] Saving config to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/embed-certs-483140/config.json ...
	I0510 19:27:49.929086  459268 machine.go:93] provisionDockerMachine start ...
	I0510 19:27:49.929120  459268 main.go:141] libmachine: (embed-certs-483140) Calling .DriverName
	I0510 19:27:49.929435  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHHostname
	I0510 19:27:49.931867  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:49.932242  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:27:49.932278  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:49.932387  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHPort
	I0510 19:27:49.932602  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:27:49.932748  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:27:49.932878  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHUsername
	I0510 19:27:49.933115  459268 main.go:141] libmachine: Using SSH client type: native
	I0510 19:27:49.933388  459268 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.72.231 22 <nil> <nil>}
	I0510 19:27:49.933401  459268 main.go:141] libmachine: About to run SSH command:
	hostname
	I0510 19:27:50.044168  459268 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0510 19:27:50.044204  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetMachineName
	I0510 19:27:50.044481  459268 buildroot.go:166] provisioning hostname "embed-certs-483140"
	I0510 19:27:50.044509  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetMachineName
	I0510 19:27:50.044693  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHHostname
	I0510 19:27:50.047840  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:50.048210  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:27:50.048232  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:50.048417  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHPort
	I0510 19:27:50.048632  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:27:50.048790  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:27:50.048942  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHUsername
	I0510 19:27:50.049085  459268 main.go:141] libmachine: Using SSH client type: native
	I0510 19:27:50.049295  459268 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.72.231 22 <nil> <nil>}
	I0510 19:27:50.049308  459268 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-483140 && echo "embed-certs-483140" | sudo tee /etc/hostname
	I0510 19:27:50.174048  459268 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-483140
	
	I0510 19:27:50.174083  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHHostname
	I0510 19:27:50.177045  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:50.177447  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:27:50.177480  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:50.177653  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHPort
	I0510 19:27:50.177869  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:27:50.178002  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:27:50.178154  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHUsername
	I0510 19:27:50.178284  459268 main.go:141] libmachine: Using SSH client type: native
	I0510 19:27:50.178498  459268 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.72.231 22 <nil> <nil>}
	I0510 19:27:50.178514  459268 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-483140' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-483140/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-483140' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0510 19:27:50.298589  459268 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0510 19:27:50.298629  459268 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20720-388787/.minikube CaCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20720-388787/.minikube}
	I0510 19:27:50.298678  459268 buildroot.go:174] setting up certificates
	I0510 19:27:50.298688  459268 provision.go:84] configureAuth start
	I0510 19:27:50.298698  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetMachineName
	I0510 19:27:50.299119  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetIP
	I0510 19:27:50.301907  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:50.302237  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:27:50.302256  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:50.302394  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHHostname
	I0510 19:27:50.305191  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:50.305523  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:27:50.305545  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:50.305718  459268 provision.go:143] copyHostCerts
	I0510 19:27:50.305792  459268 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem, removing ...
	I0510 19:27:50.305807  459268 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem
	I0510 19:27:50.305860  459268 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem (1078 bytes)
	I0510 19:27:50.305962  459268 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem, removing ...
	I0510 19:27:50.305970  459268 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem
	I0510 19:27:50.306000  459268 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem (1123 bytes)
	I0510 19:27:50.306073  459268 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem, removing ...
	I0510 19:27:50.306087  459268 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem
	I0510 19:27:50.306105  459268 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem (1675 bytes)
	I0510 19:27:50.306169  459268 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem org=jenkins.embed-certs-483140 san=[127.0.0.1 192.168.72.231 embed-certs-483140 localhost minikube]
	I0510 19:27:50.615586  459268 provision.go:177] copyRemoteCerts
	I0510 19:27:50.615663  459268 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0510 19:27:50.615691  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHHostname
	I0510 19:27:50.618693  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:50.619094  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:27:50.619124  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:50.619296  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHPort
	I0510 19:27:50.619467  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:27:50.619613  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHUsername
	I0510 19:27:50.619728  459268 sshutil.go:53] new ssh client: &{IP:192.168.72.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/embed-certs-483140/id_rsa Username:docker}
	I0510 19:27:50.709319  459268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0510 19:27:50.739864  459268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0510 19:27:50.769743  459268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0510 19:27:50.799032  459268 provision.go:87] duration metric: took 500.330996ms to configureAuth
	I0510 19:27:50.799064  459268 buildroot.go:189] setting minikube options for container-runtime
	I0510 19:27:50.799354  459268 config.go:182] Loaded profile config "embed-certs-483140": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 19:27:50.799434  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHHostname
	I0510 19:27:50.802338  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:50.802753  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:27:50.802796  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:50.802915  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHPort
	I0510 19:27:50.803096  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:27:50.803296  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:27:50.803423  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHUsername
	I0510 19:27:50.803591  459268 main.go:141] libmachine: Using SSH client type: native
	I0510 19:27:50.803807  459268 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.72.231 22 <nil> <nil>}
	I0510 19:27:50.803830  459268 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0510 19:27:51.055936  459268 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0510 19:27:51.055969  459268 machine.go:96] duration metric: took 1.126866865s to provisionDockerMachine
	I0510 19:27:51.055989  459268 start.go:293] postStartSetup for "embed-certs-483140" (driver="kvm2")
	I0510 19:27:51.056002  459268 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0510 19:27:51.056026  459268 main.go:141] libmachine: (embed-certs-483140) Calling .DriverName
	I0510 19:27:51.056453  459268 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0510 19:27:51.056494  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHHostname
	I0510 19:27:51.059782  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:51.060458  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:27:51.060503  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:51.060671  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHPort
	I0510 19:27:51.061017  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:27:51.061277  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHUsername
	I0510 19:27:51.061481  459268 sshutil.go:53] new ssh client: &{IP:192.168.72.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/embed-certs-483140/id_rsa Username:docker}
	I0510 19:27:51.153337  459268 ssh_runner.go:195] Run: cat /etc/os-release
	I0510 19:27:51.158738  459268 info.go:137] Remote host: Buildroot 2024.11.2
	I0510 19:27:51.158782  459268 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-388787/.minikube/addons for local assets ...
	I0510 19:27:51.158876  459268 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-388787/.minikube/files for local assets ...
	I0510 19:27:51.158982  459268 filesync.go:149] local asset: /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem -> 3959802.pem in /etc/ssl/certs
	I0510 19:27:51.159078  459268 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0510 19:27:51.171765  459268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem --> /etc/ssl/certs/3959802.pem (1708 bytes)
	I0510 19:27:51.204973  459268 start.go:296] duration metric: took 148.937348ms for postStartSetup
	I0510 19:27:51.205024  459268 fix.go:56] duration metric: took 22.803970548s for fixHost
	I0510 19:27:51.205051  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHHostname
	I0510 19:27:51.208258  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:51.208723  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:27:51.208748  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:51.208995  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHPort
	I0510 19:27:51.209219  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:27:51.209421  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:27:51.209566  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHUsername
	I0510 19:27:51.209735  459268 main.go:141] libmachine: Using SSH client type: native
	I0510 19:27:51.209940  459268 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.72.231 22 <nil> <nil>}
	I0510 19:27:51.209947  459268 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0510 19:27:51.320755  459268 main.go:141] libmachine: SSH cmd err, output: <nil>: 1746905271.291613089
	
	I0510 19:27:51.320787  459268 fix.go:216] guest clock: 1746905271.291613089
	I0510 19:27:51.320798  459268 fix.go:229] Guest: 2025-05-10 19:27:51.291613089 +0000 UTC Remote: 2025-05-10 19:27:51.20502902 +0000 UTC m=+27.360293338 (delta=86.584069ms)
	I0510 19:27:51.320828  459268 fix.go:200] guest clock delta is within tolerance: 86.584069ms
	I0510 19:27:51.320835  459268 start.go:83] releasing machines lock for "embed-certs-483140", held for 22.919808938s
	I0510 19:27:51.320863  459268 main.go:141] libmachine: (embed-certs-483140) Calling .DriverName
	I0510 19:27:51.321154  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetIP
	I0510 19:27:51.324081  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:51.324459  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:27:51.324483  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:51.324692  459268 main.go:141] libmachine: (embed-certs-483140) Calling .DriverName
	I0510 19:27:51.325214  459268 main.go:141] libmachine: (embed-certs-483140) Calling .DriverName
	I0510 19:27:51.325408  459268 main.go:141] libmachine: (embed-certs-483140) Calling .DriverName
	I0510 19:27:51.325548  459268 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0510 19:27:51.325594  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHHostname
	I0510 19:27:51.325646  459268 ssh_runner.go:195] Run: cat /version.json
	I0510 19:27:51.325681  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHHostname
	I0510 19:27:51.328440  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:51.328753  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:27:51.328794  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:51.328818  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:51.329002  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHPort
	I0510 19:27:51.329194  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:27:51.329232  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:27:51.329255  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:51.329376  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHUsername
	I0510 19:27:51.329402  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHPort
	I0510 19:27:51.329568  459268 sshutil.go:53] new ssh client: &{IP:192.168.72.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/embed-certs-483140/id_rsa Username:docker}
	I0510 19:27:51.329584  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:27:51.329733  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHUsername
	I0510 19:27:51.329873  459268 sshutil.go:53] new ssh client: &{IP:192.168.72.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/embed-certs-483140/id_rsa Username:docker}
	I0510 19:27:51.446190  459268 ssh_runner.go:195] Run: systemctl --version
	I0510 19:27:51.452760  459268 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0510 19:27:51.607666  459268 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0510 19:27:51.616239  459268 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0510 19:27:51.616317  459268 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0510 19:27:51.636571  459268 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0510 19:27:51.636605  459268 start.go:495] detecting cgroup driver to use...
	I0510 19:27:51.636667  459268 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0510 19:27:51.657444  459268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0510 19:27:51.676562  459268 docker.go:225] disabling cri-docker service (if available) ...
	I0510 19:27:51.676630  459268 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0510 19:27:51.694731  459268 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0510 19:27:51.712216  459268 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0510 19:27:51.876386  459268 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0510 19:27:52.020882  459268 docker.go:241] disabling docker service ...
	I0510 19:27:52.020959  459268 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0510 19:27:52.037031  459268 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0510 19:27:52.051939  459268 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0510 19:27:52.242011  459268 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0510 19:27:52.396595  459268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0510 19:27:52.412573  459268 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0510 19:27:52.436314  459268 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0510 19:27:52.436382  459268 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:27:52.448707  459268 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0510 19:27:52.448775  459268 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:27:52.460614  459268 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:27:52.472822  459268 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:27:52.484913  459268 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0510 19:27:52.497971  459268 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:27:52.511526  459268 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:27:52.533115  459268 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
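The run of sed commands above rewrites individual keys in /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, default sysctls). A rough Go equivalent of one of those edits, shown only to make the transformation explicit; the helper below is illustrative and not minikube code, while the path, key, and value are taken from the log:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setConfKey rewrites every line assigning `key` so that it assigns `value`
// instead, mirroring the `sed -i 's|^.*key = .*$|key = "value"|'` calls above.
func setConfKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	// Same edit as the first sed above: point cri-o at the expected pause image.
	if err := setConfKey("/etc/crio/crio.conf.d/02-crio.conf",
		"pause_image", "registry.k8s.io/pause:3.10"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}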
	I0510 19:27:52.545947  459268 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0510 19:27:52.556778  459268 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0510 19:27:52.556857  459268 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0510 19:27:52.573550  459268 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0510 19:27:52.589299  459268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 19:27:52.732786  459268 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0510 19:27:52.860039  459268 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0510 19:27:52.860135  459268 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0510 19:27:52.865273  459268 start.go:563] Will wait 60s for crictl version
	I0510 19:27:52.865329  459268 ssh_runner.go:195] Run: which crictl
	I0510 19:27:52.869469  459268 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0510 19:27:52.910450  459268 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
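start.go above waits up to 60s for /var/run/crio/crio.sock to appear and then for a usable crictl before reading the runtime version. A small Go sketch of that kind of bounded wait (the polling interval and the helper name are assumptions made for illustration):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls until path exists or the timeout elapses, similar to the
// 60s wait for /var/run/crio/crio.sock in the log above.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
		}
		time.Sleep(200 * time.Millisecond)
	}
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}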
	I0510 19:27:52.910548  459268 ssh_runner.go:195] Run: crio --version
	I0510 19:27:52.940082  459268 ssh_runner.go:195] Run: crio --version
	I0510 19:27:52.972063  459268 out.go:177] * Preparing Kubernetes v1.33.0 on CRI-O 1.29.1 ...
	I0510 19:27:52.973307  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetIP
	I0510 19:27:52.976415  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:52.976789  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:27:52.976816  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:52.977066  459268 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0510 19:27:52.981433  459268 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
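The bash one-liner above removes any existing host.minikube.internal entry from /etc/hosts and appends a fresh one pointing at 192.168.72.1. The same idea as a Go sketch; the function name and the in-memory rewrite are illustrative rather than minikube's implementation:

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHostsEntry drops every line ending in "\t<name>" and appends "ip\tname",
// mirroring the grep/echo pipeline in the log above.
func pinHostsEntry(content, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(content, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Print the rewritten file instead of overwriting it, unlike the log's sudo cp.
	fmt.Print(pinHostsEntry(string(data), "192.168.72.1", "host.minikube.internal"))
}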
	I0510 19:27:52.995881  459268 kubeadm.go:875] updating cluster {Name:embed-certs-483140 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:embed-certs-483140 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.231 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0510 19:27:52.995991  459268 preload.go:131] Checking if preload exists for k8s version v1.33.0 and runtime crio
	I0510 19:27:52.996030  459268 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 19:27:53.034258  459268 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.33.0". assuming images are not preloaded.
	I0510 19:27:53.034325  459268 ssh_runner.go:195] Run: which lz4
	I0510 19:27:53.038628  459268 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0510 19:27:53.043283  459268 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0510 19:27:53.043322  459268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (413217622 bytes)
	I0510 19:27:49.067037  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:49.566942  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:50.066669  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:50.566620  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:51.066533  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:51.567303  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:52.066558  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:52.567193  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:53.066234  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:53.567160  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:54.704270  459268 crio.go:462] duration metric: took 1.665684843s to copy over tarball
	I0510 19:27:54.704390  459268 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0510 19:27:56.898604  459268 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.19418195s)
	I0510 19:27:56.898641  459268 crio.go:469] duration metric: took 2.194331535s to extract the tarball
	I0510 19:27:56.898653  459268 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0510 19:27:56.939194  459268 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 19:27:56.988274  459268 crio.go:514] all images are preloaded for cri-o runtime.
	I0510 19:27:56.988305  459268 cache_images.go:84] Images are preloaded, skipping loading
	I0510 19:27:56.988315  459268 kubeadm.go:926] updating node { 192.168.72.231 8443 v1.33.0 crio true true} ...
	I0510 19:27:56.988421  459268 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.33.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-483140 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.231
	
	[Install]
	 config:
	{KubernetesVersion:v1.33.0 ClusterName:embed-certs-483140 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0510 19:27:56.988518  459268 ssh_runner.go:195] Run: crio config
	I0510 19:27:57.044585  459268 cni.go:84] Creating CNI manager for ""
	I0510 19:27:57.044616  459268 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0510 19:27:57.044632  459268 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0510 19:27:57.044674  459268 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.231 APIServerPort:8443 KubernetesVersion:v1.33.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-483140 NodeName:embed-certs-483140 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.231"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.231 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0510 19:27:57.044833  459268 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.231
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-483140"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.231"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.231"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.33.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0510 19:27:57.044929  459268 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.33.0
	I0510 19:27:57.057883  459268 binaries.go:44] Found k8s binaries, skipping transfer
	I0510 19:27:57.057964  459268 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0510 19:27:57.070669  459268 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0510 19:27:57.096191  459268 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0510 19:27:57.120219  459268 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
	I0510 19:27:57.143282  459268 ssh_runner.go:195] Run: grep 192.168.72.231	control-plane.minikube.internal$ /etc/hosts
	I0510 19:27:57.148049  459268 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.231	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0510 19:27:57.164188  459268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 19:27:57.307271  459268 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0510 19:27:57.342355  459268 certs.go:68] Setting up /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/embed-certs-483140 for IP: 192.168.72.231
	I0510 19:27:57.342381  459268 certs.go:194] generating shared ca certs ...
	I0510 19:27:57.342405  459268 certs.go:226] acquiring lock for ca certs: {Name:mk8db74782205da4ac57ef815dd495cda255251a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 19:27:57.342591  459268 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20720-388787/.minikube/ca.key
	I0510 19:27:57.342680  459268 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.key
	I0510 19:27:57.342697  459268 certs.go:256] generating profile certs ...
	I0510 19:27:57.342827  459268 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/embed-certs-483140/client.key
	I0510 19:27:57.342886  459268 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/embed-certs-483140/apiserver.key.027a75a8
	I0510 19:27:57.342922  459268 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/embed-certs-483140/proxy-client.key
	I0510 19:27:57.343035  459268 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/395980.pem (1338 bytes)
	W0510 19:27:57.343078  459268 certs.go:480] ignoring /home/jenkins/minikube-integration/20720-388787/.minikube/certs/395980_empty.pem, impossibly tiny 0 bytes
	I0510 19:27:57.343092  459268 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem (1679 bytes)
	I0510 19:27:57.343124  459268 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem (1078 bytes)
	I0510 19:27:57.343154  459268 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem (1123 bytes)
	I0510 19:27:57.343196  459268 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem (1675 bytes)
	I0510 19:27:57.343281  459268 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem (1708 bytes)
	I0510 19:27:57.343973  459268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0510 19:27:57.378887  459268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0510 19:27:57.420451  459268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0510 19:27:57.457206  459268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0510 19:27:57.499641  459268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/embed-certs-483140/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0510 19:27:57.534055  459268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/embed-certs-483140/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0510 19:27:57.564979  459268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/embed-certs-483140/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0510 19:27:57.601743  459268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/embed-certs-483140/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0510 19:27:57.633117  459268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/certs/395980.pem --> /usr/share/ca-certificates/395980.pem (1338 bytes)
	I0510 19:27:57.664410  459268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem --> /usr/share/ca-certificates/3959802.pem (1708 bytes)
	I0510 19:27:57.693525  459268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0510 19:27:57.723750  459268 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0510 19:27:57.745486  459268 ssh_runner.go:195] Run: openssl version
	I0510 19:27:57.752288  459268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/395980.pem && ln -fs /usr/share/ca-certificates/395980.pem /etc/ssl/certs/395980.pem"
	I0510 19:27:57.766087  459268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/395980.pem
	I0510 19:27:57.771459  459268 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 10 18:00 /usr/share/ca-certificates/395980.pem
	I0510 19:27:57.771521  459268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/395980.pem
	I0510 19:27:57.778642  459268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/395980.pem /etc/ssl/certs/51391683.0"
	I0510 19:27:57.792251  459268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3959802.pem && ln -fs /usr/share/ca-certificates/3959802.pem /etc/ssl/certs/3959802.pem"
	I0510 19:27:57.806097  459268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3959802.pem
	I0510 19:27:57.811543  459268 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 10 18:00 /usr/share/ca-certificates/3959802.pem
	I0510 19:27:57.811613  459268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3959802.pem
	I0510 19:27:57.818894  459268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3959802.pem /etc/ssl/certs/3ec20f2e.0"
	I0510 19:27:57.833637  459268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0510 19:27:57.848084  459268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0510 19:27:57.853506  459268 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 10 17:52 /usr/share/ca-certificates/minikubeCA.pem
	I0510 19:27:57.853569  459268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0510 19:27:57.861284  459268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0510 19:27:57.875248  459268 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0510 19:27:57.881000  459268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0510 19:27:57.889239  459268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0510 19:27:57.898408  459268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0510 19:27:57.907154  459268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0510 19:27:57.915654  459268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0510 19:27:57.924501  459268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
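Each `openssl x509 -noout -in <cert> -checkend 86400` call above exits non-zero when the certificate expires within the next 24 hours, presumably to decide whether the existing control-plane certs can be reused. A Go sketch of the same check using crypto/x509 (the helper is illustrative; the path is one of the certificates from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the condition `openssl x509 -checkend` tests in the log above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}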
	I0510 19:27:57.932344  459268 kubeadm.go:392] StartCluster: {Name:embed-certs-483140 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:embed-certs-483140 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.231 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 19:27:57.932450  459268 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0510 19:27:57.932515  459268 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0510 19:27:57.977038  459268 cri.go:89] found id: ""
	I0510 19:27:57.977121  459268 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0510 19:27:57.988821  459268 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0510 19:27:57.988856  459268 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0510 19:27:57.988917  459268 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0510 19:27:58.000862  459268 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0510 19:27:58.001626  459268 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-483140" does not appear in /home/jenkins/minikube-integration/20720-388787/kubeconfig
	I0510 19:27:58.001911  459268 kubeconfig.go:62] /home/jenkins/minikube-integration/20720-388787/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-483140" cluster setting kubeconfig missing "embed-certs-483140" context setting]
	I0510 19:27:58.002463  459268 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-388787/kubeconfig: {Name:mk5ad7285fe4c17b2779ea6d5a539f101fe94797 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 19:27:58.012994  459268 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0510 19:27:58.026138  459268 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.72.231
	I0510 19:27:58.026178  459268 kubeadm.go:1152] stopping kube-system containers ...
	I0510 19:27:58.026192  459268 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0510 19:27:58.026251  459268 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0510 19:27:58.069294  459268 cri.go:89] found id: ""
	I0510 19:27:58.069376  459268 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0510 19:27:58.089295  459268 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0510 19:27:58.101786  459268 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0510 19:27:58.101807  459268 kubeadm.go:157] found existing configuration files:
	
	I0510 19:27:58.101851  459268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0510 19:27:58.112987  459268 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0510 19:27:58.113053  459268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0510 19:27:58.125239  459268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0510 19:27:58.137764  459268 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0510 19:27:58.137828  459268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0510 19:27:58.150429  459268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0510 19:27:58.163051  459268 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0510 19:27:58.163137  459268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0510 19:27:58.175159  459268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0510 19:27:58.186717  459268 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0510 19:27:58.186792  459268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0510 19:27:58.200405  459268 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0510 19:27:58.214273  459268 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0510 19:27:58.343615  459268 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0510 19:27:54.066832  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:54.567225  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:55.067095  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:55.567141  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:56.066981  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:56.566711  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:57.066205  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:57.566404  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:58.067102  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:58.566428  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:59.367696  459268 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.024040496s)
	I0510 19:27:59.367731  459268 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0510 19:27:59.640666  459268 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0510 19:27:59.716214  459268 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0510 19:27:59.797846  459268 api_server.go:52] waiting for apiserver process to appear ...
	I0510 19:27:59.797921  459268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:00.298404  459268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:00.798112  459268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:00.834727  459268 api_server.go:72] duration metric: took 1.036892245s to wait for apiserver process to appear ...
	I0510 19:28:00.834760  459268 api_server.go:88] waiting for apiserver healthz status ...
	I0510 19:28:00.834784  459268 api_server.go:253] Checking apiserver healthz at https://192.168.72.231:8443/healthz ...
	I0510 19:28:00.835339  459268 api_server.go:269] stopped: https://192.168.72.231:8443/healthz: Get "https://192.168.72.231:8443/healthz": dial tcp 192.168.72.231:8443: connect: connection refused
	I0510 19:28:01.334998  459268 api_server.go:253] Checking apiserver healthz at https://192.168.72.231:8443/healthz ...
	I0510 19:27:59.066475  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:59.567069  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:00.066988  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:00.566888  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:01.066769  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:01.566741  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:02.066555  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:02.566338  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:03.066492  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:03.567302  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:03.904035  459268 api_server.go:279] https://192.168.72.231:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0510 19:28:03.904079  459268 api_server.go:103] status: https://192.168.72.231:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0510 19:28:03.904097  459268 api_server.go:253] Checking apiserver healthz at https://192.168.72.231:8443/healthz ...
	I0510 19:28:03.956072  459268 api_server.go:279] https://192.168.72.231:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0510 19:28:03.956108  459268 api_server.go:103] status: https://192.168.72.231:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0510 19:28:04.335740  459268 api_server.go:253] Checking apiserver healthz at https://192.168.72.231:8443/healthz ...
	I0510 19:28:04.341381  459268 api_server.go:279] https://192.168.72.231:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0510 19:28:04.341410  459268 api_server.go:103] status: https://192.168.72.231:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0510 19:28:04.835035  459268 api_server.go:253] Checking apiserver healthz at https://192.168.72.231:8443/healthz ...
	I0510 19:28:04.843795  459268 api_server.go:279] https://192.168.72.231:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0510 19:28:04.843856  459268 api_server.go:103] status: https://192.168.72.231:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0510 19:28:05.335582  459268 api_server.go:253] Checking apiserver healthz at https://192.168.72.231:8443/healthz ...
	I0510 19:28:05.340256  459268 api_server.go:279] https://192.168.72.231:8443/healthz returned 200:
	ok
	I0510 19:28:05.348062  459268 api_server.go:141] control plane version: v1.33.0
	I0510 19:28:05.348092  459268 api_server.go:131] duration metric: took 4.513324632s to wait for apiserver health ...
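The healthz exchanges above show the apiserver moving from 403 (the anonymous probe is rejected) through 500 (the rbac and priority-class post-start hooks have not finished) to a plain 200 "ok". A compact Go sketch of that style of wait loop; the interval, timeout, and certificate-skipping HTTP client are assumptions made for illustration, whereas minikube itself talks to the endpoint with the cluster's client certificates:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

// waitForHealthz polls url until it returns 200 or the deadline passes.
func waitForHealthz(url string, interval, timeout time.Duration) error {
	client := &http.Client{
		// Illustration only: skip verification of the apiserver's self-signed cert.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("apiserver healthz did not become ready within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.231:8443/healthz", 500*time.Millisecond, time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("apiserver is healthy")
}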
	I0510 19:28:05.348102  459268 cni.go:84] Creating CNI manager for ""
	I0510 19:28:05.348108  459268 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0510 19:28:05.349901  459268 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0510 19:28:05.351199  459268 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0510 19:28:05.369532  459268 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0510 19:28:05.403896  459268 system_pods.go:43] waiting for kube-system pods to appear ...
	I0510 19:28:05.410420  459268 system_pods.go:59] 8 kube-system pods found
	I0510 19:28:05.410466  459268 system_pods.go:61] "coredns-674b8bbfcf-4ld9c" [2af71141-c2b9-4788-8dcf-19ae78077d83] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0510 19:28:05.410476  459268 system_pods.go:61] "etcd-embed-certs-483140" [18335556-d523-4f93-9975-36c6ec710b8e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0510 19:28:05.410484  459268 system_pods.go:61] "kube-apiserver-embed-certs-483140" [ccfb56df-98d8-49bd-af84-4897349b90fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0510 19:28:05.410489  459268 system_pods.go:61] "kube-controller-manager-embed-certs-483140" [3aa74b28-d50d-4a50-b222-38dea567ed3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0510 19:28:05.410494  459268 system_pods.go:61] "kube-proxy-b2gvg" [d17e7a7f-57d3-4fe4-ace9-7a2fc70bb585] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0510 19:28:05.410500  459268 system_pods.go:61] "kube-scheduler-embed-certs-483140" [1eb4348b-46a3-45d6-bd78-d5d9045b600c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0510 19:28:05.410505  459268 system_pods.go:61] "metrics-server-f79f97bbb-dbl7q" [b17e1431-b05d-4d16-8f92-46b9526e09fe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0510 19:28:05.410510  459268 system_pods.go:61] "storage-provisioner" [e9b8f9e8-8add-47f3-a9a7-51fae3a958d5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0510 19:28:05.410519  459268 system_pods.go:74] duration metric: took 6.592608ms to wait for pod list to return data ...
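The system_pods lines above come from listing the kube-system namespace through the freshly repaired kubeconfig. A client-go sketch of the same listing; the kubeconfig path is the one named in the log, and the rest is illustrative (it assumes the k8s.io/client-go and k8s.io/apimachinery modules are available):

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from the log above.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20720-388787/kubeconfig")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// List kube-system pods, as the system_pods check above does.
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
	}
}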
	I0510 19:28:05.410530  459268 node_conditions.go:102] verifying NodePressure condition ...
	I0510 19:28:05.415787  459268 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0510 19:28:05.415827  459268 node_conditions.go:123] node cpu capacity is 2
	I0510 19:28:05.415843  459268 node_conditions.go:105] duration metric: took 5.307579ms to run NodePressure ...
	I0510 19:28:05.415868  459268 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0510 19:28:05.791590  459268 kubeadm.go:720] waiting for restarted kubelet to initialise ...
	I0510 19:28:05.795260  459268 kubeadm.go:735] kubelet initialised
	I0510 19:28:05.795284  459268 kubeadm.go:736] duration metric: took 3.665992ms waiting for restarted kubelet to initialise ...
	I0510 19:28:05.795305  459268 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0510 19:28:05.811911  459268 ops.go:34] apiserver oom_adj: -16
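The `cat /proc/$(pgrep kube-apiserver)/oom_adj` probe above confirms the restarted apiserver carries a strongly negative OOM score adjustment (-16), which makes the kernel much less likely to kill it under memory pressure. A Go sketch of the same probe; walking /proc here stands in for pgrep and is illustrative only:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// oomAdjFor scans /proc for the first process whose comm matches name and
// returns the contents of its oom_adj file, like the shell probe in the log.
func oomAdjFor(name string) (string, error) {
	comms, err := filepath.Glob("/proc/[0-9]*/comm")
	if err != nil {
		return "", err
	}
	for _, comm := range comms {
		data, err := os.ReadFile(comm)
		if err != nil {
			continue
		}
		if strings.TrimSpace(string(data)) != name {
			continue
		}
		adj, err := os.ReadFile(filepath.Join(filepath.Dir(comm), "oom_adj"))
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(adj)), nil
	}
	return "", fmt.Errorf("no process named %q found", name)
}

func main() {
	adj, err := oomAdjFor("kube-apiserver")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("kube-apiserver oom_adj:", adj)
}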
	I0510 19:28:05.811945  459268 kubeadm.go:593] duration metric: took 7.823080185s to restartPrimaryControlPlane
	I0510 19:28:05.811959  459268 kubeadm.go:394] duration metric: took 7.879628572s to StartCluster
	I0510 19:28:05.811982  459268 settings.go:142] acquiring lock: {Name:mk4ab6a112c947bfdedd8044017a7c560266fb5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 19:28:05.812070  459268 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20720-388787/kubeconfig
	I0510 19:28:05.813672  459268 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-388787/kubeconfig: {Name:mk5ad7285fe4c17b2779ea6d5a539f101fe94797 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 19:28:05.814006  459268 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.231 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0510 19:28:05.814204  459268 config.go:182] Loaded profile config "embed-certs-483140": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 19:28:05.814159  459268 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0510 19:28:05.814258  459268 addons.go:69] Setting default-storageclass=true in profile "embed-certs-483140"
	I0510 19:28:05.814274  459268 addons.go:69] Setting dashboard=true in profile "embed-certs-483140"
	I0510 19:28:05.814258  459268 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-483140"
	I0510 19:28:05.814294  459268 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-483140"
	I0510 19:28:05.814286  459268 addons.go:69] Setting metrics-server=true in profile "embed-certs-483140"
	W0510 19:28:05.814306  459268 addons.go:247] addon storage-provisioner should already be in state true
	I0510 19:28:05.814315  459268 addons.go:238] Setting addon metrics-server=true in "embed-certs-483140"
	W0510 19:28:05.814323  459268 addons.go:247] addon metrics-server should already be in state true
	I0510 19:28:05.814336  459268 host.go:66] Checking if "embed-certs-483140" exists ...
	I0510 19:28:05.814357  459268 host.go:66] Checking if "embed-certs-483140" exists ...
	I0510 19:28:05.814279  459268 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-483140"
	I0510 19:28:05.814296  459268 addons.go:238] Setting addon dashboard=true in "embed-certs-483140"
	W0510 19:28:05.814480  459268 addons.go:247] addon dashboard should already be in state true
	I0510 19:28:05.814522  459268 host.go:66] Checking if "embed-certs-483140" exists ...
	I0510 19:28:05.814752  459268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:28:05.814784  459268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:28:05.814801  459268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:28:05.814812  459268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:28:05.814858  459268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:28:05.814903  459268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:28:05.814860  459268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:28:05.815049  459268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:28:05.815493  459268 out.go:177] * Verifying Kubernetes components...
	I0510 19:28:05.816761  459268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 19:28:05.832190  459268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34921
	I0510 19:28:05.833019  459268 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:28:05.833618  459268 main.go:141] libmachine: Using API Version  1
	I0510 19:28:05.833652  459268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:28:05.834069  459268 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:28:05.834652  459268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:28:05.834698  459268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:28:05.835356  459268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39063
	I0510 19:28:05.835412  459268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36825
	I0510 19:28:05.835824  459268 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:28:05.835909  459268 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:28:05.836388  459268 main.go:141] libmachine: Using API Version  1
	I0510 19:28:05.836411  459268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:28:05.836524  459268 main.go:141] libmachine: Using API Version  1
	I0510 19:28:05.836544  459268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:28:05.836805  459268 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:28:05.836925  459268 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:28:05.837086  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetState
	I0510 19:28:05.837502  459268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:28:05.837542  459268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:28:05.837861  459268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45851
	I0510 19:28:05.838446  459268 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:28:05.838949  459268 main.go:141] libmachine: Using API Version  1
	I0510 19:28:05.838974  459268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:28:05.839356  459268 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:28:05.840781  459268 addons.go:238] Setting addon default-storageclass=true in "embed-certs-483140"
	W0510 19:28:05.840809  459268 addons.go:247] addon default-storageclass should already be in state true
	I0510 19:28:05.840843  459268 host.go:66] Checking if "embed-certs-483140" exists ...
	I0510 19:28:05.841225  459268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:28:05.841283  459268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:28:05.841904  459268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:28:05.841957  459268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:28:05.855806  459268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38611
	I0510 19:28:05.856498  459268 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:28:05.857301  459268 main.go:141] libmachine: Using API Version  1
	I0510 19:28:05.857333  459268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:28:05.857754  459268 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:28:05.857831  459268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39121
	I0510 19:28:05.857977  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetState
	I0510 19:28:05.858290  459268 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:28:05.858779  459268 main.go:141] libmachine: Using API Version  1
	I0510 19:28:05.858803  459268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:28:05.858874  459268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38033
	I0510 19:28:05.859327  459268 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:28:05.859538  459268 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:28:05.859968  459268 main.go:141] libmachine: Using API Version  1
	I0510 19:28:05.859992  459268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:28:05.860232  459268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:28:05.860241  459268 main.go:141] libmachine: (embed-certs-483140) Calling .DriverName
	I0510 19:28:05.860273  459268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:28:05.860355  459268 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:28:05.860496  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetState
	I0510 19:28:05.862204  459268 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0510 19:28:05.862302  459268 main.go:141] libmachine: (embed-certs-483140) Calling .DriverName
	I0510 19:28:05.863409  459268 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0510 19:28:05.863496  459268 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0510 19:28:05.863512  459268 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0510 19:28:05.863528  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHHostname
	I0510 19:28:05.864433  459268 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0510 19:28:05.864458  459268 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0510 19:28:05.864480  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHHostname
	I0510 19:28:05.867368  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:28:05.867845  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:28:05.867993  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:28:05.868025  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:28:05.868296  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHPort
	I0510 19:28:05.868504  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:28:05.868556  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:28:05.868574  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:28:05.868691  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHUsername
	I0510 19:28:05.868814  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHPort
	I0510 19:28:05.868850  459268 sshutil.go:53] new ssh client: &{IP:192.168.72.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/embed-certs-483140/id_rsa Username:docker}
	I0510 19:28:05.868996  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:28:05.869204  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHUsername
	I0510 19:28:05.869389  459268 sshutil.go:53] new ssh client: &{IP:192.168.72.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/embed-certs-483140/id_rsa Username:docker}
	I0510 19:28:05.883698  459268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46855
	I0510 19:28:05.884370  459268 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:28:05.884927  459268 main.go:141] libmachine: Using API Version  1
	I0510 19:28:05.884961  459268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:28:05.885393  459268 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:28:05.885620  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetState
	I0510 19:28:05.887679  459268 main.go:141] libmachine: (embed-certs-483140) Calling .DriverName
	I0510 19:28:05.889699  459268 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0510 19:28:05.889946  459268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35865
	I0510 19:28:05.890351  459268 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:28:05.890843  459268 main.go:141] libmachine: Using API Version  1
	I0510 19:28:05.890898  459268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:28:05.891281  459268 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:28:05.891485  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetState
	I0510 19:28:05.891961  459268 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0510 19:28:05.893147  459268 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0510 19:28:05.893168  459268 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0510 19:28:05.893173  459268 main.go:141] libmachine: (embed-certs-483140) Calling .DriverName
	I0510 19:28:05.893192  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHHostname
	I0510 19:28:05.893397  459268 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0510 19:28:05.893412  459268 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0510 19:28:05.893429  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHHostname
	I0510 19:28:05.897062  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:28:05.897408  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:28:05.897473  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:28:05.897574  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:28:05.897702  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHPort
	I0510 19:28:05.897846  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:28:05.897995  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHUsername
	I0510 19:28:05.898008  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:28:05.898040  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:28:05.898173  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHPort
	I0510 19:28:05.898163  459268 sshutil.go:53] new ssh client: &{IP:192.168.72.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/embed-certs-483140/id_rsa Username:docker}
	I0510 19:28:05.898334  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:28:05.898489  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHUsername
	I0510 19:28:05.898590  459268 sshutil.go:53] new ssh client: &{IP:192.168.72.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/embed-certs-483140/id_rsa Username:docker}
	I0510 19:28:06.110607  459268 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0510 19:28:06.144859  459268 node_ready.go:35] waiting up to 6m0s for node "embed-certs-483140" to be "Ready" ...
	I0510 19:28:06.150324  459268 node_ready.go:49] node "embed-certs-483140" is "Ready"
	I0510 19:28:06.150351  459268 node_ready.go:38] duration metric: took 5.421565ms for node "embed-certs-483140" to be "Ready" ...
	I0510 19:28:06.150364  459268 api_server.go:52] waiting for apiserver process to appear ...
	I0510 19:28:06.150417  459268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:06.172762  459268 api_server.go:72] duration metric: took 358.714749ms to wait for apiserver process to appear ...
	I0510 19:28:06.172794  459268 api_server.go:88] waiting for apiserver healthz status ...
	I0510 19:28:06.172815  459268 api_server.go:253] Checking apiserver healthz at https://192.168.72.231:8443/healthz ...
	I0510 19:28:06.181737  459268 api_server.go:279] https://192.168.72.231:8443/healthz returned 200:
	ok
	I0510 19:28:06.183824  459268 api_server.go:141] control plane version: v1.33.0
	I0510 19:28:06.183848  459268 api_server.go:131] duration metric: took 11.047783ms to wait for apiserver health ...
	I0510 19:28:06.183857  459268 system_pods.go:43] waiting for kube-system pods to appear ...
	I0510 19:28:06.188111  459268 system_pods.go:59] 8 kube-system pods found
	I0510 19:28:06.188145  459268 system_pods.go:61] "coredns-674b8bbfcf-4ld9c" [2af71141-c2b9-4788-8dcf-19ae78077d83] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0510 19:28:06.188156  459268 system_pods.go:61] "etcd-embed-certs-483140" [18335556-d523-4f93-9975-36c6ec710b8e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0510 19:28:06.188168  459268 system_pods.go:61] "kube-apiserver-embed-certs-483140" [ccfb56df-98d8-49bd-af84-4897349b90fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0510 19:28:06.188177  459268 system_pods.go:61] "kube-controller-manager-embed-certs-483140" [3aa74b28-d50d-4a50-b222-38dea567ed3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0510 19:28:06.188184  459268 system_pods.go:61] "kube-proxy-b2gvg" [d17e7a7f-57d3-4fe4-ace9-7a2fc70bb585] Running
	I0510 19:28:06.188195  459268 system_pods.go:61] "kube-scheduler-embed-certs-483140" [1eb4348b-46a3-45d6-bd78-d5d9045b600c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0510 19:28:06.188214  459268 system_pods.go:61] "metrics-server-f79f97bbb-dbl7q" [b17e1431-b05d-4d16-8f92-46b9526e09fe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0510 19:28:06.188220  459268 system_pods.go:61] "storage-provisioner" [e9b8f9e8-8add-47f3-a9a7-51fae3a958d5] Running
	I0510 19:28:06.188231  459268 system_pods.go:74] duration metric: took 4.368046ms to wait for pod list to return data ...
	I0510 19:28:06.188242  459268 default_sa.go:34] waiting for default service account to be created ...
	I0510 19:28:06.193811  459268 default_sa.go:45] found service account: "default"
	I0510 19:28:06.193846  459268 default_sa.go:55] duration metric: took 5.591706ms for default service account to be created ...
	I0510 19:28:06.193860  459268 system_pods.go:116] waiting for k8s-apps to be running ...
	I0510 19:28:06.200177  459268 system_pods.go:86] 8 kube-system pods found
	I0510 19:28:06.200220  459268 system_pods.go:89] "coredns-674b8bbfcf-4ld9c" [2af71141-c2b9-4788-8dcf-19ae78077d83] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0510 19:28:06.200233  459268 system_pods.go:89] "etcd-embed-certs-483140" [18335556-d523-4f93-9975-36c6ec710b8e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0510 19:28:06.200244  459268 system_pods.go:89] "kube-apiserver-embed-certs-483140" [ccfb56df-98d8-49bd-af84-4897349b90fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0510 19:28:06.200254  459268 system_pods.go:89] "kube-controller-manager-embed-certs-483140" [3aa74b28-d50d-4a50-b222-38dea567ed3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0510 19:28:06.200260  459268 system_pods.go:89] "kube-proxy-b2gvg" [d17e7a7f-57d3-4fe4-ace9-7a2fc70bb585] Running
	I0510 19:28:06.200268  459268 system_pods.go:89] "kube-scheduler-embed-certs-483140" [1eb4348b-46a3-45d6-bd78-d5d9045b600c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0510 19:28:06.200276  459268 system_pods.go:89] "metrics-server-f79f97bbb-dbl7q" [b17e1431-b05d-4d16-8f92-46b9526e09fe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0510 19:28:06.200282  459268 system_pods.go:89] "storage-provisioner" [e9b8f9e8-8add-47f3-a9a7-51fae3a958d5] Running
	I0510 19:28:06.200291  459268 system_pods.go:126] duration metric: took 6.423763ms to wait for k8s-apps to be running ...
	I0510 19:28:06.200300  459268 system_svc.go:44] waiting for kubelet service to be running ....
	I0510 19:28:06.200370  459268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0510 19:28:06.223314  459268 system_svc.go:56] duration metric: took 22.998023ms WaitForService to wait for kubelet
	I0510 19:28:06.223354  459268 kubeadm.go:578] duration metric: took 409.308651ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0510 19:28:06.223387  459268 node_conditions.go:102] verifying NodePressure condition ...
	I0510 19:28:06.232818  459268 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0510 19:28:06.232856  459268 node_conditions.go:123] node cpu capacity is 2
	I0510 19:28:06.232872  459268 node_conditions.go:105] duration metric: took 9.479043ms to run NodePressure ...
	I0510 19:28:06.232902  459268 start.go:241] waiting for startup goroutines ...
	I0510 19:28:06.266649  459268 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0510 19:28:06.266685  459268 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0510 19:28:06.302650  459268 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0510 19:28:06.334925  459268 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0510 19:28:06.334968  459268 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0510 19:28:06.361227  459268 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0510 19:28:06.415256  459268 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0510 19:28:06.415296  459268 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0510 19:28:06.419004  459268 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0510 19:28:06.419036  459268 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0510 19:28:06.550056  459268 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0510 19:28:06.550095  459268 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0510 19:28:06.551403  459268 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0510 19:28:06.551436  459268 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0510 19:28:06.652695  459268 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0510 19:28:06.652723  459268 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0510 19:28:06.732300  459268 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0510 19:28:06.732329  459268 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0510 19:28:06.812826  459268 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0510 19:28:06.812859  459268 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0510 19:28:06.814831  459268 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0510 19:28:06.941859  459268 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0510 19:28:06.941910  459268 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0510 19:28:07.112650  459268 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0510 19:28:07.112683  459268 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0510 19:28:07.230569  459268 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0510 19:28:07.230606  459268 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0510 19:28:07.348026  459268 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0510 19:28:08.311112  459268 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.008411221s)
	I0510 19:28:08.311190  459268 main.go:141] libmachine: Making call to close driver server
	I0510 19:28:08.311196  459268 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.949932076s)
	I0510 19:28:08.311207  459268 main.go:141] libmachine: (embed-certs-483140) Calling .Close
	I0510 19:28:08.311253  459268 main.go:141] libmachine: Making call to close driver server
	I0510 19:28:08.311374  459268 main.go:141] libmachine: (embed-certs-483140) Calling .Close
	I0510 19:28:08.311588  459268 main.go:141] libmachine: Successfully made call to close driver server
	I0510 19:28:08.311605  459268 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 19:28:08.311650  459268 main.go:141] libmachine: (embed-certs-483140) DBG | Closing plugin on server side
	I0510 19:28:08.311673  459268 main.go:141] libmachine: Successfully made call to close driver server
	I0510 19:28:08.311684  459268 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 19:28:08.311686  459268 main.go:141] libmachine: (embed-certs-483140) DBG | Closing plugin on server side
	I0510 19:28:08.311693  459268 main.go:141] libmachine: Making call to close driver server
	I0510 19:28:08.311701  459268 main.go:141] libmachine: (embed-certs-483140) Calling .Close
	I0510 19:28:08.311749  459268 main.go:141] libmachine: Making call to close driver server
	I0510 19:28:08.311769  459268 main.go:141] libmachine: (embed-certs-483140) Calling .Close
	I0510 19:28:08.311934  459268 main.go:141] libmachine: Successfully made call to close driver server
	I0510 19:28:08.311961  459268 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 19:28:08.313225  459268 main.go:141] libmachine: (embed-certs-483140) DBG | Closing plugin on server side
	I0510 19:28:08.313491  459268 main.go:141] libmachine: Successfully made call to close driver server
	I0510 19:28:08.313506  459268 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 19:28:08.331318  459268 main.go:141] libmachine: Making call to close driver server
	I0510 19:28:08.331353  459268 main.go:141] libmachine: (embed-certs-483140) Calling .Close
	I0510 19:28:08.331610  459268 main.go:141] libmachine: (embed-certs-483140) DBG | Closing plugin on server side
	I0510 19:28:08.331656  459268 main.go:141] libmachine: Successfully made call to close driver server
	I0510 19:28:08.331664  459268 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 19:28:08.561201  459268 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.746324825s)
	I0510 19:28:08.561271  459268 main.go:141] libmachine: Making call to close driver server
	I0510 19:28:08.561285  459268 main.go:141] libmachine: (embed-certs-483140) Calling .Close
	I0510 19:28:08.561649  459268 main.go:141] libmachine: Successfully made call to close driver server
	I0510 19:28:08.561672  459268 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 19:28:08.561690  459268 main.go:141] libmachine: Making call to close driver server
	I0510 19:28:08.561698  459268 main.go:141] libmachine: (embed-certs-483140) Calling .Close
	I0510 19:28:08.562030  459268 main.go:141] libmachine: (embed-certs-483140) DBG | Closing plugin on server side
	I0510 19:28:08.562077  459268 main.go:141] libmachine: Successfully made call to close driver server
	I0510 19:28:08.562088  459268 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 19:28:08.562103  459268 addons.go:479] Verifying addon metrics-server=true in "embed-certs-483140"
	I0510 19:28:04.066752  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:04.567029  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:05.066242  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:05.567101  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:06.066378  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:06.566985  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:07.066671  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:07.566514  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:08.067086  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:08.566885  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:09.320104  459268 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.972016021s)
	I0510 19:28:09.320180  459268 main.go:141] libmachine: Making call to close driver server
	I0510 19:28:09.320206  459268 main.go:141] libmachine: (embed-certs-483140) Calling .Close
	I0510 19:28:09.320585  459268 main.go:141] libmachine: (embed-certs-483140) DBG | Closing plugin on server side
	I0510 19:28:09.320633  459268 main.go:141] libmachine: Successfully made call to close driver server
	I0510 19:28:09.320643  459268 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 19:28:09.320652  459268 main.go:141] libmachine: Making call to close driver server
	I0510 19:28:09.320660  459268 main.go:141] libmachine: (embed-certs-483140) Calling .Close
	I0510 19:28:09.320941  459268 main.go:141] libmachine: (embed-certs-483140) DBG | Closing plugin on server side
	I0510 19:28:09.320962  459268 main.go:141] libmachine: Successfully made call to close driver server
	I0510 19:28:09.320975  459268 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 19:28:09.323341  459268 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-483140 addons enable metrics-server
	
	I0510 19:28:09.324636  459268 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0510 19:28:09.325664  459268 addons.go:514] duration metric: took 3.511519103s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0510 19:28:09.325722  459268 start.go:246] waiting for cluster config update ...
	I0510 19:28:09.325741  459268 start.go:255] writing updated cluster config ...
	I0510 19:28:09.326092  459268 ssh_runner.go:195] Run: rm -f paused
	I0510 19:28:09.344642  459268 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0510 19:28:09.354144  459268 pod_ready.go:83] waiting for pod "coredns-674b8bbfcf-4ld9c" in "kube-system" namespace to be "Ready" or be gone ...
	W0510 19:28:11.360637  459268 pod_ready.go:104] pod "coredns-674b8bbfcf-4ld9c" is not "Ready", error: <nil>
	W0510 19:28:13.860282  459268 pod_ready.go:104] pod "coredns-674b8bbfcf-4ld9c" is not "Ready", error: <nil>
	I0510 19:28:09.066763  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:09.566992  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:10.066908  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:10.566843  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:11.066514  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:11.566388  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:12.066218  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:12.566934  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:13.066645  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:13.567085  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0510 19:28:15.860630  459268 pod_ready.go:104] pod "coredns-674b8bbfcf-4ld9c" is not "Ready", error: <nil>
	I0510 19:28:17.393207  459268 pod_ready.go:94] pod "coredns-674b8bbfcf-4ld9c" is "Ready"
	I0510 19:28:17.393237  459268 pod_ready.go:86] duration metric: took 8.039060776s for pod "coredns-674b8bbfcf-4ld9c" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:28:17.418993  459268 pod_ready.go:83] waiting for pod "etcd-embed-certs-483140" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:28:17.429049  459268 pod_ready.go:94] pod "etcd-embed-certs-483140" is "Ready"
	I0510 19:28:17.429081  459268 pod_ready.go:86] duration metric: took 10.055799ms for pod "etcd-embed-certs-483140" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:28:17.432083  459268 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-483140" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:28:17.437554  459268 pod_ready.go:94] pod "kube-apiserver-embed-certs-483140" is "Ready"
	I0510 19:28:17.437591  459268 pod_ready.go:86] duration metric: took 5.476778ms for pod "kube-apiserver-embed-certs-483140" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:28:17.440334  459268 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-483140" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:28:17.557594  459268 pod_ready.go:94] pod "kube-controller-manager-embed-certs-483140" is "Ready"
	I0510 19:28:17.557622  459268 pod_ready.go:86] duration metric: took 117.264734ms for pod "kube-controller-manager-embed-certs-483140" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:28:17.769743  459268 pod_ready.go:83] waiting for pod "kube-proxy-b2gvg" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:28:18.158013  459268 pod_ready.go:94] pod "kube-proxy-b2gvg" is "Ready"
	I0510 19:28:18.158042  459268 pod_ready.go:86] duration metric: took 388.270745ms for pod "kube-proxy-b2gvg" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:28:18.379133  459268 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-483140" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:28:18.758017  459268 pod_ready.go:94] pod "kube-scheduler-embed-certs-483140" is "Ready"
	I0510 19:28:18.758052  459268 pod_ready.go:86] duration metric: took 378.881401ms for pod "kube-scheduler-embed-certs-483140" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:28:18.758067  459268 pod_ready.go:40] duration metric: took 9.413376926s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0510 19:28:18.804476  459268 start.go:607] kubectl: 1.33.0, cluster: 1.33.0 (minor skew: 0)
	I0510 19:28:18.807325  459268 out.go:177] * Done! kubectl is now configured to use "embed-certs-483140" cluster and "default" namespace by default
	I0510 19:28:14.066994  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:14.567064  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:15.066411  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:15.567220  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:16.067320  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:16.566859  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:17.066625  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:17.566521  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:18.066671  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:18.566592  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:19.066253  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:19.566860  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:20.066367  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:20.567118  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:21.067193  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:21.566937  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:22.066333  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:22.567056  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:23.066988  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:23.566331  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:24.066265  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:24.566513  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:25.067048  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:25.567212  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:26.067158  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:26.566324  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:27.066325  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:27.566435  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:28.067014  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:28.566560  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:29.066490  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:29.567080  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:30.067132  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:30.566495  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:31.066973  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:31.566321  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:32.067212  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:32.566665  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:33.066716  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:33.566326  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:34.067017  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:34.566429  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:35.067039  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:35.566936  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:36.066553  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:36.566402  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:37.066800  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:37.566267  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:38.066188  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:38.567060  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:28:38.567180  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:28:38.614003  459056 cri.go:89] found id: ""
	I0510 19:28:38.614094  459056 logs.go:282] 0 containers: []
	W0510 19:28:38.614120  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:28:38.614132  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:28:38.614211  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:28:38.651000  459056 cri.go:89] found id: ""
	I0510 19:28:38.651034  459056 logs.go:282] 0 containers: []
	W0510 19:28:38.651046  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:28:38.651053  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:28:38.651121  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:28:38.688211  459056 cri.go:89] found id: ""
	I0510 19:28:38.688238  459056 logs.go:282] 0 containers: []
	W0510 19:28:38.688246  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:28:38.688252  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:28:38.688318  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:28:38.726904  459056 cri.go:89] found id: ""
	I0510 19:28:38.726933  459056 logs.go:282] 0 containers: []
	W0510 19:28:38.726953  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:28:38.726963  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:28:38.727020  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:28:38.764293  459056 cri.go:89] found id: ""
	I0510 19:28:38.764321  459056 logs.go:282] 0 containers: []
	W0510 19:28:38.764330  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:28:38.764335  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:28:38.764390  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:28:38.802044  459056 cri.go:89] found id: ""
	I0510 19:28:38.802075  459056 logs.go:282] 0 containers: []
	W0510 19:28:38.802083  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:28:38.802104  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:28:38.802160  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:28:38.840951  459056 cri.go:89] found id: ""
	I0510 19:28:38.840991  459056 logs.go:282] 0 containers: []
	W0510 19:28:38.841002  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:28:38.841010  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:28:38.841098  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:28:38.879478  459056 cri.go:89] found id: ""
	I0510 19:28:38.879514  459056 logs.go:282] 0 containers: []
	W0510 19:28:38.879522  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:28:38.879533  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:28:38.879548  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:28:38.932148  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:28:38.932193  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:28:38.947813  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:28:38.947845  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:28:39.094230  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:28:39.094264  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:28:39.094283  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:28:39.170356  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:28:39.170406  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:28:41.716545  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:41.734713  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:28:41.734791  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:28:41.772135  459056 cri.go:89] found id: ""
	I0510 19:28:41.772178  459056 logs.go:282] 0 containers: []
	W0510 19:28:41.772187  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:28:41.772193  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:28:41.772246  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:28:41.810841  459056 cri.go:89] found id: ""
	I0510 19:28:41.810875  459056 logs.go:282] 0 containers: []
	W0510 19:28:41.810886  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:28:41.810893  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:28:41.810969  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:28:41.848600  459056 cri.go:89] found id: ""
	I0510 19:28:41.848627  459056 logs.go:282] 0 containers: []
	W0510 19:28:41.848636  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:28:41.848643  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:28:41.848735  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:28:41.887214  459056 cri.go:89] found id: ""
	I0510 19:28:41.887261  459056 logs.go:282] 0 containers: []
	W0510 19:28:41.887273  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:28:41.887282  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:28:41.887353  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:28:41.926422  459056 cri.go:89] found id: ""
	I0510 19:28:41.926455  459056 logs.go:282] 0 containers: []
	W0510 19:28:41.926466  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:28:41.926474  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:28:41.926573  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:28:41.963547  459056 cri.go:89] found id: ""
	I0510 19:28:41.963582  459056 logs.go:282] 0 containers: []
	W0510 19:28:41.963595  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:28:41.963625  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:28:41.963699  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:28:42.007903  459056 cri.go:89] found id: ""
	I0510 19:28:42.007930  459056 logs.go:282] 0 containers: []
	W0510 19:28:42.007938  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:28:42.007943  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:28:42.007996  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:28:42.048020  459056 cri.go:89] found id: ""
	I0510 19:28:42.048054  459056 logs.go:282] 0 containers: []
	W0510 19:28:42.048062  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:28:42.048072  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:28:42.048085  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:28:42.099210  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:28:42.099267  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:28:42.114915  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:28:42.114947  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:28:42.196330  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:28:42.196364  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:28:42.196380  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:28:42.278729  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:28:42.278786  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:28:44.825880  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:44.844164  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:28:44.844258  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:28:44.883963  459056 cri.go:89] found id: ""
	I0510 19:28:44.883992  459056 logs.go:282] 0 containers: []
	W0510 19:28:44.884001  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:28:44.884008  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:28:44.884085  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:28:44.920183  459056 cri.go:89] found id: ""
	I0510 19:28:44.920214  459056 logs.go:282] 0 containers: []
	W0510 19:28:44.920222  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:28:44.920228  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:28:44.920304  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:28:44.956038  459056 cri.go:89] found id: ""
	I0510 19:28:44.956072  459056 logs.go:282] 0 containers: []
	W0510 19:28:44.956087  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:28:44.956093  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:28:44.956165  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:28:44.992412  459056 cri.go:89] found id: ""
	I0510 19:28:44.992448  459056 logs.go:282] 0 containers: []
	W0510 19:28:44.992460  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:28:44.992468  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:28:44.992540  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:28:45.029970  459056 cri.go:89] found id: ""
	I0510 19:28:45.030008  459056 logs.go:282] 0 containers: []
	W0510 19:28:45.030020  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:28:45.030027  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:28:45.030097  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:28:45.065606  459056 cri.go:89] found id: ""
	I0510 19:28:45.065643  459056 logs.go:282] 0 containers: []
	W0510 19:28:45.065654  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:28:45.065662  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:28:45.065736  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:28:45.102978  459056 cri.go:89] found id: ""
	I0510 19:28:45.103009  459056 logs.go:282] 0 containers: []
	W0510 19:28:45.103018  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:28:45.103024  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:28:45.103087  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:28:45.143725  459056 cri.go:89] found id: ""
	I0510 19:28:45.143752  459056 logs.go:282] 0 containers: []
	W0510 19:28:45.143761  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:28:45.143771  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:28:45.143783  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:28:45.187406  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:28:45.187443  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:28:45.237672  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:28:45.237725  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:28:45.253387  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:28:45.253425  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:28:45.326218  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:28:45.326246  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:28:45.326265  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:28:47.904696  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:47.922232  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:28:47.922326  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:28:47.964247  459056 cri.go:89] found id: ""
	I0510 19:28:47.964284  459056 logs.go:282] 0 containers: []
	W0510 19:28:47.964293  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:28:47.964299  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:28:47.964358  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:28:48.001130  459056 cri.go:89] found id: ""
	I0510 19:28:48.001159  459056 logs.go:282] 0 containers: []
	W0510 19:28:48.001167  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:28:48.001175  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:28:48.001245  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:28:48.038486  459056 cri.go:89] found id: ""
	I0510 19:28:48.038519  459056 logs.go:282] 0 containers: []
	W0510 19:28:48.038528  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:28:48.038534  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:28:48.038604  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:28:48.073594  459056 cri.go:89] found id: ""
	I0510 19:28:48.073628  459056 logs.go:282] 0 containers: []
	W0510 19:28:48.073636  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:28:48.073643  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:28:48.073716  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:28:48.113159  459056 cri.go:89] found id: ""
	I0510 19:28:48.113191  459056 logs.go:282] 0 containers: []
	W0510 19:28:48.113199  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:28:48.113205  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:28:48.113271  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:28:48.158534  459056 cri.go:89] found id: ""
	I0510 19:28:48.158570  459056 logs.go:282] 0 containers: []
	W0510 19:28:48.158581  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:28:48.158589  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:28:48.158661  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:28:48.194840  459056 cri.go:89] found id: ""
	I0510 19:28:48.194871  459056 logs.go:282] 0 containers: []
	W0510 19:28:48.194883  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:28:48.194889  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:28:48.194952  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:28:48.233411  459056 cri.go:89] found id: ""
	I0510 19:28:48.233446  459056 logs.go:282] 0 containers: []
	W0510 19:28:48.233455  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:28:48.233465  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:28:48.233481  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:28:48.248955  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:28:48.248988  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:28:48.321462  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:28:48.321486  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:28:48.321499  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:28:48.413091  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:28:48.413139  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:28:48.455370  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:28:48.455417  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:28:51.008549  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:51.026088  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:28:51.026175  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:28:51.065801  459056 cri.go:89] found id: ""
	I0510 19:28:51.065834  459056 logs.go:282] 0 containers: []
	W0510 19:28:51.065844  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:28:51.065850  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:28:51.065915  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:28:51.108971  459056 cri.go:89] found id: ""
	I0510 19:28:51.109002  459056 logs.go:282] 0 containers: []
	W0510 19:28:51.109010  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:28:51.109017  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:28:51.109081  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:28:51.153399  459056 cri.go:89] found id: ""
	I0510 19:28:51.153425  459056 logs.go:282] 0 containers: []
	W0510 19:28:51.153434  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:28:51.153440  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:28:51.153501  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:28:51.193120  459056 cri.go:89] found id: ""
	I0510 19:28:51.193150  459056 logs.go:282] 0 containers: []
	W0510 19:28:51.193159  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:28:51.193165  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:28:51.193219  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:28:51.232126  459056 cri.go:89] found id: ""
	I0510 19:28:51.232160  459056 logs.go:282] 0 containers: []
	W0510 19:28:51.232169  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:28:51.232176  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:28:51.232262  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:28:51.271265  459056 cri.go:89] found id: ""
	I0510 19:28:51.271292  459056 logs.go:282] 0 containers: []
	W0510 19:28:51.271300  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:28:51.271306  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:28:51.271380  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:28:51.314653  459056 cri.go:89] found id: ""
	I0510 19:28:51.314687  459056 logs.go:282] 0 containers: []
	W0510 19:28:51.314698  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:28:51.314710  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:28:51.314788  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:28:51.353697  459056 cri.go:89] found id: ""
	I0510 19:28:51.353726  459056 logs.go:282] 0 containers: []
	W0510 19:28:51.353734  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:28:51.353746  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:28:51.353762  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:28:51.406474  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:28:51.406515  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:28:51.423057  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:28:51.423092  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:28:51.501527  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:28:51.501551  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:28:51.501563  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:28:51.582228  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:28:51.582278  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:28:54.132967  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:54.161653  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:28:54.161729  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:28:54.201063  459056 cri.go:89] found id: ""
	I0510 19:28:54.201098  459056 logs.go:282] 0 containers: []
	W0510 19:28:54.201111  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:28:54.201120  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:28:54.201200  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:28:54.241268  459056 cri.go:89] found id: ""
	I0510 19:28:54.241298  459056 logs.go:282] 0 containers: []
	W0510 19:28:54.241307  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:28:54.241320  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:28:54.241388  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:28:54.279508  459056 cri.go:89] found id: ""
	I0510 19:28:54.279540  459056 logs.go:282] 0 containers: []
	W0510 19:28:54.279549  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:28:54.279555  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:28:54.279621  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:28:54.322256  459056 cri.go:89] found id: ""
	I0510 19:28:54.322295  459056 logs.go:282] 0 containers: []
	W0510 19:28:54.322306  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:28:54.322349  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:28:54.322423  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:28:54.360014  459056 cri.go:89] found id: ""
	I0510 19:28:54.360051  459056 logs.go:282] 0 containers: []
	W0510 19:28:54.360062  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:28:54.360071  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:28:54.360149  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:28:54.399429  459056 cri.go:89] found id: ""
	I0510 19:28:54.399462  459056 logs.go:282] 0 containers: []
	W0510 19:28:54.399473  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:28:54.399479  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:28:54.399544  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:28:54.437094  459056 cri.go:89] found id: ""
	I0510 19:28:54.437120  459056 logs.go:282] 0 containers: []
	W0510 19:28:54.437129  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:28:54.437135  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:28:54.437213  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:28:54.473964  459056 cri.go:89] found id: ""
	I0510 19:28:54.474000  459056 logs.go:282] 0 containers: []
	W0510 19:28:54.474012  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:28:54.474024  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:28:54.474037  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:28:54.526415  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:28:54.526458  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:28:54.542142  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:28:54.542177  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:28:54.618555  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:28:54.618582  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:28:54.618600  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:28:54.695979  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:28:54.696026  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:28:57.241583  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:57.259270  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:28:57.259347  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:28:57.297603  459056 cri.go:89] found id: ""
	I0510 19:28:57.297640  459056 logs.go:282] 0 containers: []
	W0510 19:28:57.297648  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:28:57.297664  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:28:57.297734  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:28:57.339031  459056 cri.go:89] found id: ""
	I0510 19:28:57.339063  459056 logs.go:282] 0 containers: []
	W0510 19:28:57.339072  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:28:57.339090  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:28:57.339167  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:28:57.375753  459056 cri.go:89] found id: ""
	I0510 19:28:57.375783  459056 logs.go:282] 0 containers: []
	W0510 19:28:57.375792  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:28:57.375799  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:28:57.375855  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:28:57.414729  459056 cri.go:89] found id: ""
	I0510 19:28:57.414758  459056 logs.go:282] 0 containers: []
	W0510 19:28:57.414770  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:28:57.414779  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:28:57.414854  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:28:57.453265  459056 cri.go:89] found id: ""
	I0510 19:28:57.453298  459056 logs.go:282] 0 containers: []
	W0510 19:28:57.453309  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:28:57.453318  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:28:57.453379  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:28:57.491548  459056 cri.go:89] found id: ""
	I0510 19:28:57.491579  459056 logs.go:282] 0 containers: []
	W0510 19:28:57.491587  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:28:57.491594  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:28:57.491670  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:28:57.529795  459056 cri.go:89] found id: ""
	I0510 19:28:57.529822  459056 logs.go:282] 0 containers: []
	W0510 19:28:57.529831  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:28:57.529837  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:28:57.529901  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:28:57.570146  459056 cri.go:89] found id: ""
	I0510 19:28:57.570177  459056 logs.go:282] 0 containers: []
	W0510 19:28:57.570186  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:28:57.570196  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:28:57.570211  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:28:57.622879  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:28:57.622928  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:28:57.639210  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:28:57.639256  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:28:57.717348  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:28:57.717382  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:28:57.717399  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:28:57.799663  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:28:57.799716  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:00.351909  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:00.369231  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:00.369300  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:00.419696  459056 cri.go:89] found id: ""
	I0510 19:29:00.419730  459056 logs.go:282] 0 containers: []
	W0510 19:29:00.419740  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:00.419747  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:00.419810  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:00.456741  459056 cri.go:89] found id: ""
	I0510 19:29:00.456847  459056 logs.go:282] 0 containers: []
	W0510 19:29:00.456865  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:00.456874  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:00.456956  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:00.495771  459056 cri.go:89] found id: ""
	I0510 19:29:00.495816  459056 logs.go:282] 0 containers: []
	W0510 19:29:00.495829  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:00.495839  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:00.495919  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:00.541754  459056 cri.go:89] found id: ""
	I0510 19:29:00.541791  459056 logs.go:282] 0 containers: []
	W0510 19:29:00.541803  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:00.541812  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:00.541892  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:00.584200  459056 cri.go:89] found id: ""
	I0510 19:29:00.584230  459056 logs.go:282] 0 containers: []
	W0510 19:29:00.584239  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:00.584245  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:00.584336  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:00.632920  459056 cri.go:89] found id: ""
	I0510 19:29:00.632949  459056 logs.go:282] 0 containers: []
	W0510 19:29:00.632960  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:00.632969  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:00.633033  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:00.684270  459056 cri.go:89] found id: ""
	I0510 19:29:00.684300  459056 logs.go:282] 0 containers: []
	W0510 19:29:00.684309  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:00.684315  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:00.684368  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:00.722259  459056 cri.go:89] found id: ""
	I0510 19:29:00.722292  459056 logs.go:282] 0 containers: []
	W0510 19:29:00.722301  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:00.722311  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:00.722328  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:00.737395  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:00.737431  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:00.816432  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:00.816465  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:00.816485  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:00.900576  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:00.900631  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:00.946239  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:00.946285  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:03.499135  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:03.516795  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:03.516874  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:03.561554  459056 cri.go:89] found id: ""
	I0510 19:29:03.561589  459056 logs.go:282] 0 containers: []
	W0510 19:29:03.561599  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:03.561607  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:03.561674  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:03.604183  459056 cri.go:89] found id: ""
	I0510 19:29:03.604213  459056 logs.go:282] 0 containers: []
	W0510 19:29:03.604221  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:03.604227  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:03.604297  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:03.641319  459056 cri.go:89] found id: ""
	I0510 19:29:03.641350  459056 logs.go:282] 0 containers: []
	W0510 19:29:03.641359  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:03.641366  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:03.641431  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:03.679306  459056 cri.go:89] found id: ""
	I0510 19:29:03.679345  459056 logs.go:282] 0 containers: []
	W0510 19:29:03.679356  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:03.679364  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:03.679444  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:03.720380  459056 cri.go:89] found id: ""
	I0510 19:29:03.720412  459056 logs.go:282] 0 containers: []
	W0510 19:29:03.720420  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:03.720426  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:03.720497  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:03.758115  459056 cri.go:89] found id: ""
	I0510 19:29:03.758183  459056 logs.go:282] 0 containers: []
	W0510 19:29:03.758193  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:03.758206  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:03.758283  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:03.797182  459056 cri.go:89] found id: ""
	I0510 19:29:03.797215  459056 logs.go:282] 0 containers: []
	W0510 19:29:03.797226  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:03.797235  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:03.797294  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:03.837236  459056 cri.go:89] found id: ""
	I0510 19:29:03.837266  459056 logs.go:282] 0 containers: []
	W0510 19:29:03.837274  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:03.837284  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:03.837302  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:03.886362  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:03.886412  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:03.902546  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:03.902581  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:03.980181  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:03.980206  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:03.980219  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:04.060587  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:04.060641  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:06.606310  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:06.633919  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:06.634001  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:06.672938  459056 cri.go:89] found id: ""
	I0510 19:29:06.672969  459056 logs.go:282] 0 containers: []
	W0510 19:29:06.672978  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:06.672986  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:06.673047  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:06.711567  459056 cri.go:89] found id: ""
	I0510 19:29:06.711603  459056 logs.go:282] 0 containers: []
	W0510 19:29:06.711615  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:06.711624  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:06.711710  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:06.752456  459056 cri.go:89] found id: ""
	I0510 19:29:06.752498  459056 logs.go:282] 0 containers: []
	W0510 19:29:06.752510  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:06.752520  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:06.752592  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:06.792722  459056 cri.go:89] found id: ""
	I0510 19:29:06.792755  459056 logs.go:282] 0 containers: []
	W0510 19:29:06.792764  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:06.792771  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:06.792832  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:06.833199  459056 cri.go:89] found id: ""
	I0510 19:29:06.833231  459056 logs.go:282] 0 containers: []
	W0510 19:29:06.833239  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:06.833246  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:06.833300  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:06.871347  459056 cri.go:89] found id: ""
	I0510 19:29:06.871378  459056 logs.go:282] 0 containers: []
	W0510 19:29:06.871386  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:06.871393  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:06.871448  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:06.909447  459056 cri.go:89] found id: ""
	I0510 19:29:06.909478  459056 logs.go:282] 0 containers: []
	W0510 19:29:06.909489  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:06.909497  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:06.909561  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:06.945795  459056 cri.go:89] found id: ""
	I0510 19:29:06.945829  459056 logs.go:282] 0 containers: []
	W0510 19:29:06.945837  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:06.945847  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:06.945861  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:07.028777  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:07.028825  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:07.070640  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:07.070673  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:07.124335  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:07.124383  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:07.140167  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:07.140197  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:07.218319  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:09.718885  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:09.737619  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:09.737701  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:09.775164  459056 cri.go:89] found id: ""
	I0510 19:29:09.775203  459056 logs.go:282] 0 containers: []
	W0510 19:29:09.775211  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:09.775218  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:09.775292  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:09.819357  459056 cri.go:89] found id: ""
	I0510 19:29:09.819395  459056 logs.go:282] 0 containers: []
	W0510 19:29:09.819406  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:09.819415  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:09.819490  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:09.858894  459056 cri.go:89] found id: ""
	I0510 19:29:09.858928  459056 logs.go:282] 0 containers: []
	W0510 19:29:09.858937  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:09.858942  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:09.858996  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:09.895496  459056 cri.go:89] found id: ""
	I0510 19:29:09.895543  459056 logs.go:282] 0 containers: []
	W0510 19:29:09.895554  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:09.895562  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:09.895629  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:09.935443  459056 cri.go:89] found id: ""
	I0510 19:29:09.935476  459056 logs.go:282] 0 containers: []
	W0510 19:29:09.935484  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:09.935490  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:09.935552  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:09.975013  459056 cri.go:89] found id: ""
	I0510 19:29:09.975050  459056 logs.go:282] 0 containers: []
	W0510 19:29:09.975059  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:09.975066  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:09.975122  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:10.017332  459056 cri.go:89] found id: ""
	I0510 19:29:10.017364  459056 logs.go:282] 0 containers: []
	W0510 19:29:10.017372  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:10.017378  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:10.017432  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:10.054109  459056 cri.go:89] found id: ""
	I0510 19:29:10.054145  459056 logs.go:282] 0 containers: []
	W0510 19:29:10.054157  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:10.054169  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:10.054187  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:10.107219  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:10.107275  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:10.122900  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:10.122946  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:10.197374  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:10.197402  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:10.197423  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:10.276176  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:10.276222  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:12.822189  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:12.839516  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:12.839586  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:12.876495  459056 cri.go:89] found id: ""
	I0510 19:29:12.876532  459056 logs.go:282] 0 containers: []
	W0510 19:29:12.876544  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:12.876553  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:12.876628  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:12.914537  459056 cri.go:89] found id: ""
	I0510 19:29:12.914571  459056 logs.go:282] 0 containers: []
	W0510 19:29:12.914581  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:12.914587  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:12.914662  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:12.953369  459056 cri.go:89] found id: ""
	I0510 19:29:12.953403  459056 logs.go:282] 0 containers: []
	W0510 19:29:12.953412  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:12.953418  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:12.953475  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:12.991117  459056 cri.go:89] found id: ""
	I0510 19:29:12.991150  459056 logs.go:282] 0 containers: []
	W0510 19:29:12.991159  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:12.991167  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:12.991226  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:13.035209  459056 cri.go:89] found id: ""
	I0510 19:29:13.035268  459056 logs.go:282] 0 containers: []
	W0510 19:29:13.035281  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:13.035290  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:13.035364  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:13.072156  459056 cri.go:89] found id: ""
	I0510 19:29:13.072191  459056 logs.go:282] 0 containers: []
	W0510 19:29:13.072203  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:13.072211  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:13.072279  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:13.108863  459056 cri.go:89] found id: ""
	I0510 19:29:13.108893  459056 logs.go:282] 0 containers: []
	W0510 19:29:13.108903  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:13.108910  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:13.108967  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:13.155406  459056 cri.go:89] found id: ""
	I0510 19:29:13.155437  459056 logs.go:282] 0 containers: []
	W0510 19:29:13.155445  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:13.155455  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:13.155467  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:13.208638  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:13.208694  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:13.225071  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:13.225107  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:13.300472  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:13.300498  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:13.300515  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:13.380669  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:13.380714  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:15.924108  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:15.941384  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:15.941465  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:15.984230  459056 cri.go:89] found id: ""
	I0510 19:29:15.984259  459056 logs.go:282] 0 containers: []
	W0510 19:29:15.984267  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:15.984273  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:15.984328  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:16.022696  459056 cri.go:89] found id: ""
	I0510 19:29:16.022725  459056 logs.go:282] 0 containers: []
	W0510 19:29:16.022733  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:16.022740  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:16.022818  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:16.064311  459056 cri.go:89] found id: ""
	I0510 19:29:16.064344  459056 logs.go:282] 0 containers: []
	W0510 19:29:16.064356  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:16.064364  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:16.064432  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:16.110646  459056 cri.go:89] found id: ""
	I0510 19:29:16.110680  459056 logs.go:282] 0 containers: []
	W0510 19:29:16.110688  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:16.110695  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:16.110779  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:16.155423  459056 cri.go:89] found id: ""
	I0510 19:29:16.155466  459056 logs.go:282] 0 containers: []
	W0510 19:29:16.155478  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:16.155485  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:16.155560  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:16.199404  459056 cri.go:89] found id: ""
	I0510 19:29:16.199437  459056 logs.go:282] 0 containers: []
	W0510 19:29:16.199445  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:16.199455  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:16.199518  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:16.244501  459056 cri.go:89] found id: ""
	I0510 19:29:16.244532  459056 logs.go:282] 0 containers: []
	W0510 19:29:16.244541  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:16.244547  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:16.244622  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:16.289564  459056 cri.go:89] found id: ""
	I0510 19:29:16.289594  459056 logs.go:282] 0 containers: []
	W0510 19:29:16.289609  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:16.289628  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:16.289645  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:16.339326  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:16.339360  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:16.392002  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:16.392050  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:16.408009  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:16.408039  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:16.480932  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:16.480959  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:16.480972  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:19.062321  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:19.079587  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:19.079667  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:19.122776  459056 cri.go:89] found id: ""
	I0510 19:29:19.122809  459056 logs.go:282] 0 containers: []
	W0510 19:29:19.122817  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:19.122823  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:19.122882  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:19.160116  459056 cri.go:89] found id: ""
	I0510 19:29:19.160154  459056 logs.go:282] 0 containers: []
	W0510 19:29:19.160166  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:19.160175  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:19.160258  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:19.198049  459056 cri.go:89] found id: ""
	I0510 19:29:19.198081  459056 logs.go:282] 0 containers: []
	W0510 19:29:19.198089  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:19.198095  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:19.198151  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:19.236547  459056 cri.go:89] found id: ""
	I0510 19:29:19.236578  459056 logs.go:282] 0 containers: []
	W0510 19:29:19.236587  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:19.236596  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:19.236682  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:19.274688  459056 cri.go:89] found id: ""
	I0510 19:29:19.274727  459056 logs.go:282] 0 containers: []
	W0510 19:29:19.274738  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:19.274746  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:19.274819  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:19.317813  459056 cri.go:89] found id: ""
	I0510 19:29:19.317843  459056 logs.go:282] 0 containers: []
	W0510 19:29:19.317853  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:19.317865  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:19.317934  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:19.360619  459056 cri.go:89] found id: ""
	I0510 19:29:19.360654  459056 logs.go:282] 0 containers: []
	W0510 19:29:19.360663  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:19.360669  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:19.360735  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:19.399001  459056 cri.go:89] found id: ""
	I0510 19:29:19.399030  459056 logs.go:282] 0 containers: []
	W0510 19:29:19.399038  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:19.399048  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:19.399061  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:19.482768  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:19.482819  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:19.525273  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:19.525316  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:19.579149  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:19.579197  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:19.594813  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:19.594853  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:19.667950  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:22.169701  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:22.187665  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:22.187746  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:22.227992  459056 cri.go:89] found id: ""
	I0510 19:29:22.228022  459056 logs.go:282] 0 containers: []
	W0510 19:29:22.228030  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:22.228041  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:22.228164  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:22.267106  459056 cri.go:89] found id: ""
	I0510 19:29:22.267140  459056 logs.go:282] 0 containers: []
	W0510 19:29:22.267149  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:22.267155  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:22.267211  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:22.305600  459056 cri.go:89] found id: ""
	I0510 19:29:22.305628  459056 logs.go:282] 0 containers: []
	W0510 19:29:22.305636  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:22.305643  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:22.305711  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:22.345673  459056 cri.go:89] found id: ""
	I0510 19:29:22.345708  459056 logs.go:282] 0 containers: []
	W0510 19:29:22.345719  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:22.345724  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:22.345778  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:22.384325  459056 cri.go:89] found id: ""
	I0510 19:29:22.384358  459056 logs.go:282] 0 containers: []
	W0510 19:29:22.384371  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:22.384387  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:22.384467  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:22.424747  459056 cri.go:89] found id: ""
	I0510 19:29:22.424779  459056 logs.go:282] 0 containers: []
	W0510 19:29:22.424787  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:22.424794  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:22.424848  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:22.470878  459056 cri.go:89] found id: ""
	I0510 19:29:22.470916  459056 logs.go:282] 0 containers: []
	W0510 19:29:22.470929  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:22.470937  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:22.471010  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:22.515651  459056 cri.go:89] found id: ""
	I0510 19:29:22.515682  459056 logs.go:282] 0 containers: []
	W0510 19:29:22.515693  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:22.515713  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:22.515730  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:22.573654  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:22.573699  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:22.590599  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:22.590637  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:22.670834  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:22.670866  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:22.670882  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:22.754958  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:22.755019  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:25.299898  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:25.317959  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:25.318047  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:25.358952  459056 cri.go:89] found id: ""
	I0510 19:29:25.358990  459056 logs.go:282] 0 containers: []
	W0510 19:29:25.358999  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:25.359005  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:25.359068  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:25.402269  459056 cri.go:89] found id: ""
	I0510 19:29:25.402300  459056 logs.go:282] 0 containers: []
	W0510 19:29:25.402308  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:25.402321  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:25.402402  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:25.441309  459056 cri.go:89] found id: ""
	I0510 19:29:25.441338  459056 logs.go:282] 0 containers: []
	W0510 19:29:25.441348  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:25.441357  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:25.441421  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:25.477026  459056 cri.go:89] found id: ""
	I0510 19:29:25.477073  459056 logs.go:282] 0 containers: []
	W0510 19:29:25.477087  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:25.477095  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:25.477168  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:25.514227  459056 cri.go:89] found id: ""
	I0510 19:29:25.514263  459056 logs.go:282] 0 containers: []
	W0510 19:29:25.514274  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:25.514283  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:25.514357  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:25.552961  459056 cri.go:89] found id: ""
	I0510 19:29:25.552993  459056 logs.go:282] 0 containers: []
	W0510 19:29:25.553002  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:25.553010  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:25.553075  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:25.591284  459056 cri.go:89] found id: ""
	I0510 19:29:25.591315  459056 logs.go:282] 0 containers: []
	W0510 19:29:25.591327  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:25.591336  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:25.591404  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:25.631688  459056 cri.go:89] found id: ""
	I0510 19:29:25.631720  459056 logs.go:282] 0 containers: []
	W0510 19:29:25.631728  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:25.631737  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:25.631750  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:25.686015  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:25.686057  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:25.702233  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:25.702271  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:25.777340  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:25.777373  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:25.777389  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:25.857072  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:25.857118  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:28.400902  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:28.418498  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:28.418570  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:28.454908  459056 cri.go:89] found id: ""
	I0510 19:29:28.454941  459056 logs.go:282] 0 containers: []
	W0510 19:29:28.454950  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:28.454956  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:28.455014  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:28.493646  459056 cri.go:89] found id: ""
	I0510 19:29:28.493682  459056 logs.go:282] 0 containers: []
	W0510 19:29:28.493691  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:28.493700  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:28.493766  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:28.531482  459056 cri.go:89] found id: ""
	I0510 19:29:28.531524  459056 logs.go:282] 0 containers: []
	W0510 19:29:28.531537  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:28.531546  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:28.531618  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:28.568042  459056 cri.go:89] found id: ""
	I0510 19:29:28.568078  459056 logs.go:282] 0 containers: []
	W0510 19:29:28.568087  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:28.568093  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:28.568150  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:28.607141  459056 cri.go:89] found id: ""
	I0510 19:29:28.607172  459056 logs.go:282] 0 containers: []
	W0510 19:29:28.607181  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:28.607187  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:28.607271  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:28.645485  459056 cri.go:89] found id: ""
	I0510 19:29:28.645519  459056 logs.go:282] 0 containers: []
	W0510 19:29:28.645532  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:28.645544  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:28.645618  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:28.685596  459056 cri.go:89] found id: ""
	I0510 19:29:28.685638  459056 logs.go:282] 0 containers: []
	W0510 19:29:28.685649  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:28.685657  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:28.685724  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:28.724977  459056 cri.go:89] found id: ""
	I0510 19:29:28.725005  459056 logs.go:282] 0 containers: []
	W0510 19:29:28.725013  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:28.725023  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:28.725101  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:28.777421  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:28.777476  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:28.793767  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:28.793806  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:28.865581  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:28.865611  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:28.865638  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:28.945845  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:28.945895  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:31.491500  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:31.508822  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:31.508896  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:31.546371  459056 cri.go:89] found id: ""
	I0510 19:29:31.546400  459056 logs.go:282] 0 containers: []
	W0510 19:29:31.546412  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:31.546420  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:31.546478  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:31.588214  459056 cri.go:89] found id: ""
	I0510 19:29:31.588244  459056 logs.go:282] 0 containers: []
	W0510 19:29:31.588252  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:31.588258  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:31.588313  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:31.626683  459056 cri.go:89] found id: ""
	I0510 19:29:31.626718  459056 logs.go:282] 0 containers: []
	W0510 19:29:31.626729  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:31.626737  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:31.626810  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:31.665979  459056 cri.go:89] found id: ""
	I0510 19:29:31.666013  459056 logs.go:282] 0 containers: []
	W0510 19:29:31.666023  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:31.666030  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:31.666087  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:31.702718  459056 cri.go:89] found id: ""
	I0510 19:29:31.702751  459056 logs.go:282] 0 containers: []
	W0510 19:29:31.702767  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:31.702775  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:31.702830  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:31.740496  459056 cri.go:89] found id: ""
	I0510 19:29:31.740530  459056 logs.go:282] 0 containers: []
	W0510 19:29:31.740553  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:31.740561  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:31.740616  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:31.782178  459056 cri.go:89] found id: ""
	I0510 19:29:31.782209  459056 logs.go:282] 0 containers: []
	W0510 19:29:31.782218  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:31.782224  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:31.782278  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:31.817466  459056 cri.go:89] found id: ""
	I0510 19:29:31.817495  459056 logs.go:282] 0 containers: []
	W0510 19:29:31.817503  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:31.817512  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:31.817527  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:31.832641  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:31.832675  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:31.913719  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:31.913745  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:31.913764  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:31.990267  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:31.990316  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:32.033353  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:32.033384  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:34.586504  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:34.606546  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:34.606628  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:34.644492  459056 cri.go:89] found id: ""
	I0510 19:29:34.644526  459056 logs.go:282] 0 containers: []
	W0510 19:29:34.644539  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:34.644547  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:34.644616  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:34.684520  459056 cri.go:89] found id: ""
	I0510 19:29:34.684550  459056 logs.go:282] 0 containers: []
	W0510 19:29:34.684566  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:34.684572  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:34.684627  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:34.722015  459056 cri.go:89] found id: ""
	I0510 19:29:34.722047  459056 logs.go:282] 0 containers: []
	W0510 19:29:34.722055  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:34.722062  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:34.722118  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:34.760175  459056 cri.go:89] found id: ""
	I0510 19:29:34.760203  459056 logs.go:282] 0 containers: []
	W0510 19:29:34.760212  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:34.760219  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:34.760291  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:34.797742  459056 cri.go:89] found id: ""
	I0510 19:29:34.797775  459056 logs.go:282] 0 containers: []
	W0510 19:29:34.797787  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:34.797796  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:34.797870  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:34.834792  459056 cri.go:89] found id: ""
	I0510 19:29:34.834824  459056 logs.go:282] 0 containers: []
	W0510 19:29:34.834832  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:34.834839  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:34.834905  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:34.881683  459056 cri.go:89] found id: ""
	I0510 19:29:34.881720  459056 logs.go:282] 0 containers: []
	W0510 19:29:34.881729  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:34.881738  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:34.881815  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:34.925574  459056 cri.go:89] found id: ""
	I0510 19:29:34.925605  459056 logs.go:282] 0 containers: []
	W0510 19:29:34.925613  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:34.925622  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:34.925636  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:34.977426  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:34.977477  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:34.993190  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:34.993226  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:35.071565  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:35.071590  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:35.071604  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:35.149510  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:35.149563  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:37.697052  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:37.714716  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:37.714828  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:37.752850  459056 cri.go:89] found id: ""
	I0510 19:29:37.752896  459056 logs.go:282] 0 containers: []
	W0510 19:29:37.752909  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:37.752916  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:37.752989  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:37.791810  459056 cri.go:89] found id: ""
	I0510 19:29:37.791847  459056 logs.go:282] 0 containers: []
	W0510 19:29:37.791860  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:37.791868  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:37.791929  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:37.831622  459056 cri.go:89] found id: ""
	I0510 19:29:37.831658  459056 logs.go:282] 0 containers: []
	W0510 19:29:37.831669  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:37.831677  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:37.831755  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:37.873390  459056 cri.go:89] found id: ""
	I0510 19:29:37.873419  459056 logs.go:282] 0 containers: []
	W0510 19:29:37.873427  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:37.873434  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:37.873493  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:37.915385  459056 cri.go:89] found id: ""
	I0510 19:29:37.915421  459056 logs.go:282] 0 containers: []
	W0510 19:29:37.915431  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:37.915439  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:37.915517  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:37.953620  459056 cri.go:89] found id: ""
	I0510 19:29:37.953654  459056 logs.go:282] 0 containers: []
	W0510 19:29:37.953666  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:37.953678  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:37.953772  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:37.991282  459056 cri.go:89] found id: ""
	I0510 19:29:37.991315  459056 logs.go:282] 0 containers: []
	W0510 19:29:37.991328  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:37.991338  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:37.991413  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:38.028482  459056 cri.go:89] found id: ""
	I0510 19:29:38.028520  459056 logs.go:282] 0 containers: []
	W0510 19:29:38.028531  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:38.028545  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:38.028563  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:38.083448  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:38.083506  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:38.099016  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:38.099067  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:38.174538  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:38.174572  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:38.174587  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:38.258394  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:38.258443  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:40.803473  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:40.821814  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:40.821912  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:40.860566  459056 cri.go:89] found id: ""
	I0510 19:29:40.860600  459056 logs.go:282] 0 containers: []
	W0510 19:29:40.860612  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:40.860622  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:40.860683  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:40.897132  459056 cri.go:89] found id: ""
	I0510 19:29:40.897161  459056 logs.go:282] 0 containers: []
	W0510 19:29:40.897169  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:40.897177  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:40.897239  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:40.944583  459056 cri.go:89] found id: ""
	I0510 19:29:40.944622  459056 logs.go:282] 0 containers: []
	W0510 19:29:40.944636  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:40.944645  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:40.944715  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:40.983132  459056 cri.go:89] found id: ""
	I0510 19:29:40.983165  459056 logs.go:282] 0 containers: []
	W0510 19:29:40.983176  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:40.983185  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:40.983283  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:41.020441  459056 cri.go:89] found id: ""
	I0510 19:29:41.020477  459056 logs.go:282] 0 containers: []
	W0510 19:29:41.020486  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:41.020494  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:41.020548  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:41.058522  459056 cri.go:89] found id: ""
	I0510 19:29:41.058562  459056 logs.go:282] 0 containers: []
	W0510 19:29:41.058572  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:41.058579  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:41.058635  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:41.098730  459056 cri.go:89] found id: ""
	I0510 19:29:41.098775  459056 logs.go:282] 0 containers: []
	W0510 19:29:41.098785  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:41.098792  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:41.098854  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:41.139270  459056 cri.go:89] found id: ""
	I0510 19:29:41.139302  459056 logs.go:282] 0 containers: []
	W0510 19:29:41.139310  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:41.139322  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:41.139335  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:41.215383  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:41.215434  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:41.258268  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:41.258314  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:41.313241  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:41.313287  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:41.332109  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:41.332148  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:41.433376  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:43.935156  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:43.953570  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:43.953694  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:43.994014  459056 cri.go:89] found id: ""
	I0510 19:29:43.994049  459056 logs.go:282] 0 containers: []
	W0510 19:29:43.994075  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:43.994083  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:43.994158  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:44.033884  459056 cri.go:89] found id: ""
	I0510 19:29:44.033922  459056 logs.go:282] 0 containers: []
	W0510 19:29:44.033932  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:44.033942  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:44.033999  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:44.075902  459056 cri.go:89] found id: ""
	I0510 19:29:44.075941  459056 logs.go:282] 0 containers: []
	W0510 19:29:44.075950  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:44.075956  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:44.076018  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:44.116711  459056 cri.go:89] found id: ""
	I0510 19:29:44.116745  459056 logs.go:282] 0 containers: []
	W0510 19:29:44.116757  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:44.116779  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:44.116853  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:44.157617  459056 cri.go:89] found id: ""
	I0510 19:29:44.157652  459056 logs.go:282] 0 containers: []
	W0510 19:29:44.157661  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:44.157668  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:44.157727  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:44.197634  459056 cri.go:89] found id: ""
	I0510 19:29:44.197671  459056 logs.go:282] 0 containers: []
	W0510 19:29:44.197679  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:44.197685  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:44.197743  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:44.235756  459056 cri.go:89] found id: ""
	I0510 19:29:44.235797  459056 logs.go:282] 0 containers: []
	W0510 19:29:44.235810  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:44.235818  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:44.235879  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:44.274251  459056 cri.go:89] found id: ""
	I0510 19:29:44.274292  459056 logs.go:282] 0 containers: []
	W0510 19:29:44.274305  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:44.274317  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:44.274337  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:44.318629  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:44.318669  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:44.370941  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:44.370987  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:44.386660  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:44.386697  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:44.463056  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:44.463085  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:44.463103  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:47.046858  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:47.068619  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:47.068705  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:47.119108  459056 cri.go:89] found id: ""
	I0510 19:29:47.119138  459056 logs.go:282] 0 containers: []
	W0510 19:29:47.119148  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:47.119154  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:47.119210  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:47.160941  459056 cri.go:89] found id: ""
	I0510 19:29:47.160974  459056 logs.go:282] 0 containers: []
	W0510 19:29:47.160982  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:47.160988  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:47.161050  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:47.210420  459056 cri.go:89] found id: ""
	I0510 19:29:47.210452  459056 logs.go:282] 0 containers: []
	W0510 19:29:47.210460  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:47.210466  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:47.210520  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:47.250554  459056 cri.go:89] found id: ""
	I0510 19:29:47.250591  459056 logs.go:282] 0 containers: []
	W0510 19:29:47.250600  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:47.250612  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:47.250674  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:47.290621  459056 cri.go:89] found id: ""
	I0510 19:29:47.290656  459056 logs.go:282] 0 containers: []
	W0510 19:29:47.290667  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:47.290676  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:47.290749  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:47.331044  459056 cri.go:89] found id: ""
	I0510 19:29:47.331079  459056 logs.go:282] 0 containers: []
	W0510 19:29:47.331091  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:47.331100  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:47.331162  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:47.369926  459056 cri.go:89] found id: ""
	I0510 19:29:47.369958  459056 logs.go:282] 0 containers: []
	W0510 19:29:47.369967  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:47.369973  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:47.370047  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:47.410658  459056 cri.go:89] found id: ""
	I0510 19:29:47.410699  459056 logs.go:282] 0 containers: []
	W0510 19:29:47.410708  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:47.410723  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:47.410737  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:47.489045  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:47.489100  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:47.536078  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:47.536117  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:47.588663  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:47.588727  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:47.606182  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:47.606220  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:47.680331  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:50.180849  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:50.198636  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:50.198740  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:50.238270  459056 cri.go:89] found id: ""
	I0510 19:29:50.238301  459056 logs.go:282] 0 containers: []
	W0510 19:29:50.238314  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:50.238323  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:50.238399  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:50.276207  459056 cri.go:89] found id: ""
	I0510 19:29:50.276244  459056 logs.go:282] 0 containers: []
	W0510 19:29:50.276256  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:50.276264  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:50.276333  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:50.311826  459056 cri.go:89] found id: ""
	I0510 19:29:50.311864  459056 logs.go:282] 0 containers: []
	W0510 19:29:50.311875  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:50.311884  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:50.311961  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:50.347100  459056 cri.go:89] found id: ""
	I0510 19:29:50.347133  459056 logs.go:282] 0 containers: []
	W0510 19:29:50.347142  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:50.347151  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:50.347229  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:50.382788  459056 cri.go:89] found id: ""
	I0510 19:29:50.382816  459056 logs.go:282] 0 containers: []
	W0510 19:29:50.382824  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:50.382830  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:50.382898  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:50.420656  459056 cri.go:89] found id: ""
	I0510 19:29:50.420700  459056 logs.go:282] 0 containers: []
	W0510 19:29:50.420709  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:50.420722  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:50.420782  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:50.460911  459056 cri.go:89] found id: ""
	I0510 19:29:50.460948  459056 logs.go:282] 0 containers: []
	W0510 19:29:50.460956  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:50.460962  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:50.461016  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:50.498074  459056 cri.go:89] found id: ""
	I0510 19:29:50.498109  459056 logs.go:282] 0 containers: []
	W0510 19:29:50.498122  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:50.498135  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:50.498152  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:50.576436  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:50.576486  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:50.620554  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:50.620594  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:50.672242  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:50.672292  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:50.688401  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:50.688435  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:50.765125  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:53.266941  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:53.285235  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:53.285306  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:53.327821  459056 cri.go:89] found id: ""
	I0510 19:29:53.327872  459056 logs.go:282] 0 containers: []
	W0510 19:29:53.327880  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:53.327888  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:53.327971  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:53.367170  459056 cri.go:89] found id: ""
	I0510 19:29:53.367212  459056 logs.go:282] 0 containers: []
	W0510 19:29:53.367224  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:53.367254  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:53.367338  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:53.411071  459056 cri.go:89] found id: ""
	I0510 19:29:53.411104  459056 logs.go:282] 0 containers: []
	W0510 19:29:53.411112  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:53.411119  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:53.411194  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:53.451093  459056 cri.go:89] found id: ""
	I0510 19:29:53.451160  459056 logs.go:282] 0 containers: []
	W0510 19:29:53.451175  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:53.451184  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:53.451278  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:53.490305  459056 cri.go:89] found id: ""
	I0510 19:29:53.490337  459056 logs.go:282] 0 containers: []
	W0510 19:29:53.490345  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:53.490351  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:53.490421  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:53.529657  459056 cri.go:89] found id: ""
	I0510 19:29:53.529703  459056 logs.go:282] 0 containers: []
	W0510 19:29:53.529716  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:53.529728  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:53.529801  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:53.570169  459056 cri.go:89] found id: ""
	I0510 19:29:53.570211  459056 logs.go:282] 0 containers: []
	W0510 19:29:53.570223  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:53.570232  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:53.570300  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:53.613547  459056 cri.go:89] found id: ""
	I0510 19:29:53.613576  459056 logs.go:282] 0 containers: []
	W0510 19:29:53.613584  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:53.613593  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:53.613607  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:53.665574  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:53.665633  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:53.682279  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:53.682319  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:53.760795  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:53.760824  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:53.760843  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:53.844386  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:53.844433  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:56.398332  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:56.416456  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:56.416552  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:56.454203  459056 cri.go:89] found id: ""
	I0510 19:29:56.454240  459056 logs.go:282] 0 containers: []
	W0510 19:29:56.454254  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:56.454265  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:56.454350  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:56.492744  459056 cri.go:89] found id: ""
	I0510 19:29:56.492779  459056 logs.go:282] 0 containers: []
	W0510 19:29:56.492791  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:56.492799  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:56.492893  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:56.529891  459056 cri.go:89] found id: ""
	I0510 19:29:56.529924  459056 logs.go:282] 0 containers: []
	W0510 19:29:56.529933  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:56.529943  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:56.530000  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:56.566697  459056 cri.go:89] found id: ""
	I0510 19:29:56.566732  459056 logs.go:282] 0 containers: []
	W0510 19:29:56.566743  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:56.566752  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:56.566816  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:56.608258  459056 cri.go:89] found id: ""
	I0510 19:29:56.608295  459056 logs.go:282] 0 containers: []
	W0510 19:29:56.608307  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:56.608315  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:56.608384  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:56.648700  459056 cri.go:89] found id: ""
	I0510 19:29:56.648734  459056 logs.go:282] 0 containers: []
	W0510 19:29:56.648746  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:56.648755  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:56.648823  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:56.686623  459056 cri.go:89] found id: ""
	I0510 19:29:56.686661  459056 logs.go:282] 0 containers: []
	W0510 19:29:56.686672  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:56.686680  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:56.686750  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:56.726136  459056 cri.go:89] found id: ""
	I0510 19:29:56.726165  459056 logs.go:282] 0 containers: []
	W0510 19:29:56.726180  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:56.726193  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:56.726209  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:56.777146  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:56.777195  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:56.793496  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:56.793530  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:56.866401  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:56.866436  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:56.866452  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:56.944116  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:56.944168  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:59.488989  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:59.506161  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:59.506233  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:59.542854  459056 cri.go:89] found id: ""
	I0510 19:29:59.542891  459056 logs.go:282] 0 containers: []
	W0510 19:29:59.542900  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:59.542907  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:59.542961  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:59.580216  459056 cri.go:89] found id: ""
	I0510 19:29:59.580257  459056 logs.go:282] 0 containers: []
	W0510 19:29:59.580268  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:59.580276  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:59.580348  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:59.623729  459056 cri.go:89] found id: ""
	I0510 19:29:59.623770  459056 logs.go:282] 0 containers: []
	W0510 19:29:59.623781  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:59.623790  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:59.623854  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:59.662414  459056 cri.go:89] found id: ""
	I0510 19:29:59.662447  459056 logs.go:282] 0 containers: []
	W0510 19:29:59.662455  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:59.662462  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:59.662531  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:59.700471  459056 cri.go:89] found id: ""
	I0510 19:29:59.700505  459056 logs.go:282] 0 containers: []
	W0510 19:29:59.700514  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:59.700520  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:59.700593  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:59.740841  459056 cri.go:89] found id: ""
	I0510 19:29:59.740876  459056 logs.go:282] 0 containers: []
	W0510 19:29:59.740884  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:59.740891  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:59.740944  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:59.782895  459056 cri.go:89] found id: ""
	I0510 19:29:59.782937  459056 logs.go:282] 0 containers: []
	W0510 19:29:59.782946  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:59.782952  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:59.783021  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:59.820556  459056 cri.go:89] found id: ""
	I0510 19:29:59.820591  459056 logs.go:282] 0 containers: []
	W0510 19:29:59.820603  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:59.820615  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:59.820632  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:59.835555  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:59.835591  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:59.907710  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:59.907742  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:59.907758  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:59.983847  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:59.983895  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:00.030738  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:00.030782  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:02.583146  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:02.601217  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:02.601290  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:02.638485  459056 cri.go:89] found id: ""
	I0510 19:30:02.638523  459056 logs.go:282] 0 containers: []
	W0510 19:30:02.638536  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:02.638544  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:02.638625  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:02.676096  459056 cri.go:89] found id: ""
	I0510 19:30:02.676124  459056 logs.go:282] 0 containers: []
	W0510 19:30:02.676132  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:02.676138  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:02.676198  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:02.712753  459056 cri.go:89] found id: ""
	I0510 19:30:02.712794  459056 logs.go:282] 0 containers: []
	W0510 19:30:02.712806  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:02.712814  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:02.712889  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:02.750540  459056 cri.go:89] found id: ""
	I0510 19:30:02.750572  459056 logs.go:282] 0 containers: []
	W0510 19:30:02.750580  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:02.750588  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:02.750666  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:02.789337  459056 cri.go:89] found id: ""
	I0510 19:30:02.789372  459056 logs.go:282] 0 containers: []
	W0510 19:30:02.789386  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:02.789394  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:02.789471  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:02.827044  459056 cri.go:89] found id: ""
	I0510 19:30:02.827076  459056 logs.go:282] 0 containers: []
	W0510 19:30:02.827087  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:02.827094  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:02.827154  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:02.867202  459056 cri.go:89] found id: ""
	I0510 19:30:02.867251  459056 logs.go:282] 0 containers: []
	W0510 19:30:02.867264  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:02.867272  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:02.867336  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:02.906104  459056 cri.go:89] found id: ""
	I0510 19:30:02.906136  459056 logs.go:282] 0 containers: []
	W0510 19:30:02.906145  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:02.906155  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:02.906167  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:02.959451  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:02.959504  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:02.975037  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:02.975074  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:03.051037  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:03.051066  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:03.051083  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:03.132615  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:03.132663  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:05.677564  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:05.695683  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:05.695774  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:05.733222  459056 cri.go:89] found id: ""
	I0510 19:30:05.733253  459056 logs.go:282] 0 containers: []
	W0510 19:30:05.733266  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:05.733273  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:05.733343  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:05.775893  459056 cri.go:89] found id: ""
	I0510 19:30:05.775926  459056 logs.go:282] 0 containers: []
	W0510 19:30:05.775938  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:05.775946  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:05.776013  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:05.814170  459056 cri.go:89] found id: ""
	I0510 19:30:05.814201  459056 logs.go:282] 0 containers: []
	W0510 19:30:05.814209  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:05.814215  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:05.814271  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:05.865156  459056 cri.go:89] found id: ""
	I0510 19:30:05.865185  459056 logs.go:282] 0 containers: []
	W0510 19:30:05.865193  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:05.865200  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:05.865267  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:05.904409  459056 cri.go:89] found id: ""
	I0510 19:30:05.904440  459056 logs.go:282] 0 containers: []
	W0510 19:30:05.904449  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:05.904455  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:05.904516  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:05.948278  459056 cri.go:89] found id: ""
	I0510 19:30:05.948308  459056 logs.go:282] 0 containers: []
	W0510 19:30:05.948316  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:05.948322  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:05.948383  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:05.986379  459056 cri.go:89] found id: ""
	I0510 19:30:05.986415  459056 logs.go:282] 0 containers: []
	W0510 19:30:05.986426  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:05.986435  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:05.986502  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:06.030940  459056 cri.go:89] found id: ""
	I0510 19:30:06.030974  459056 logs.go:282] 0 containers: []
	W0510 19:30:06.030984  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:06.030994  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:06.031007  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:06.081923  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:06.081973  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:06.097288  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:06.097321  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:06.169428  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:06.169457  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:06.169471  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:06.247404  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:06.247457  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:08.791138  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:08.810447  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:08.810527  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:08.849947  459056 cri.go:89] found id: ""
	I0510 19:30:08.849983  459056 logs.go:282] 0 containers: []
	W0510 19:30:08.849996  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:08.850005  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:08.850079  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:08.889474  459056 cri.go:89] found id: ""
	I0510 19:30:08.889511  459056 logs.go:282] 0 containers: []
	W0510 19:30:08.889521  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:08.889530  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:08.889605  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:08.929364  459056 cri.go:89] found id: ""
	I0510 19:30:08.929402  459056 logs.go:282] 0 containers: []
	W0510 19:30:08.929414  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:08.929420  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:08.929481  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:08.970260  459056 cri.go:89] found id: ""
	I0510 19:30:08.970292  459056 logs.go:282] 0 containers: []
	W0510 19:30:08.970301  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:08.970312  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:08.970370  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:09.011080  459056 cri.go:89] found id: ""
	I0510 19:30:09.011114  459056 logs.go:282] 0 containers: []
	W0510 19:30:09.011123  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:09.011130  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:09.011192  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:09.050057  459056 cri.go:89] found id: ""
	I0510 19:30:09.050096  459056 logs.go:282] 0 containers: []
	W0510 19:30:09.050106  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:09.050112  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:09.050177  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:09.089408  459056 cri.go:89] found id: ""
	I0510 19:30:09.089454  459056 logs.go:282] 0 containers: []
	W0510 19:30:09.089467  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:09.089484  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:09.089559  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:09.127502  459056 cri.go:89] found id: ""
	I0510 19:30:09.127533  459056 logs.go:282] 0 containers: []
	W0510 19:30:09.127544  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:09.127555  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:09.127573  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:09.177856  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:09.177903  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:09.194009  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:09.194041  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:09.269803  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:09.269833  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:09.269851  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:09.350498  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:09.350562  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:11.895252  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:11.913748  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:11.913819  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:11.957943  459056 cri.go:89] found id: ""
	I0510 19:30:11.957974  459056 logs.go:282] 0 containers: []
	W0510 19:30:11.957982  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:11.957990  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:11.958059  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:11.999707  459056 cri.go:89] found id: ""
	I0510 19:30:11.999735  459056 logs.go:282] 0 containers: []
	W0510 19:30:11.999743  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:11.999750  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:11.999805  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:12.044866  459056 cri.go:89] found id: ""
	I0510 19:30:12.044905  459056 logs.go:282] 0 containers: []
	W0510 19:30:12.044914  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:12.044922  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:12.044980  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:12.083885  459056 cri.go:89] found id: ""
	I0510 19:30:12.083925  459056 logs.go:282] 0 containers: []
	W0510 19:30:12.083938  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:12.083946  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:12.084014  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:12.124186  459056 cri.go:89] found id: ""
	I0510 19:30:12.124223  459056 logs.go:282] 0 containers: []
	W0510 19:30:12.124232  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:12.124239  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:12.124296  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:12.163773  459056 cri.go:89] found id: ""
	I0510 19:30:12.163809  459056 logs.go:282] 0 containers: []
	W0510 19:30:12.163817  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:12.163824  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:12.163887  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:12.208245  459056 cri.go:89] found id: ""
	I0510 19:30:12.208285  459056 logs.go:282] 0 containers: []
	W0510 19:30:12.208297  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:12.208305  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:12.208378  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:12.248816  459056 cri.go:89] found id: ""
	I0510 19:30:12.248855  459056 logs.go:282] 0 containers: []
	W0510 19:30:12.248871  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:12.248885  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:12.248907  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:12.293098  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:12.293137  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:12.346119  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:12.346166  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:12.362174  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:12.362208  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:12.436485  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:12.436514  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:12.436527  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:15.021483  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:15.039908  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:15.039983  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:15.077291  459056 cri.go:89] found id: ""
	I0510 19:30:15.077323  459056 logs.go:282] 0 containers: []
	W0510 19:30:15.077335  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:15.077344  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:15.077417  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:15.119066  459056 cri.go:89] found id: ""
	I0510 19:30:15.119099  459056 logs.go:282] 0 containers: []
	W0510 19:30:15.119108  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:15.119114  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:15.119169  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:15.158927  459056 cri.go:89] found id: ""
	I0510 19:30:15.158957  459056 logs.go:282] 0 containers: []
	W0510 19:30:15.158968  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:15.158976  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:15.159052  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:15.199423  459056 cri.go:89] found id: ""
	I0510 19:30:15.199458  459056 logs.go:282] 0 containers: []
	W0510 19:30:15.199467  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:15.199474  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:15.199538  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:15.237695  459056 cri.go:89] found id: ""
	I0510 19:30:15.237734  459056 logs.go:282] 0 containers: []
	W0510 19:30:15.237744  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:15.237751  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:15.237822  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:15.280652  459056 cri.go:89] found id: ""
	I0510 19:30:15.280693  459056 logs.go:282] 0 containers: []
	W0510 19:30:15.280705  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:15.280721  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:15.280794  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:15.319730  459056 cri.go:89] found id: ""
	I0510 19:30:15.319767  459056 logs.go:282] 0 containers: []
	W0510 19:30:15.319780  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:15.319788  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:15.319861  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:15.361113  459056 cri.go:89] found id: ""
	I0510 19:30:15.361147  459056 logs.go:282] 0 containers: []
	W0510 19:30:15.361156  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:15.361165  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:15.361178  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:15.424953  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:15.425003  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:15.444155  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:15.444187  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:15.520040  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:15.520067  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:15.520080  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:15.595963  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:15.596013  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:18.142672  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:18.160293  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:18.160373  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:18.197867  459056 cri.go:89] found id: ""
	I0510 19:30:18.197911  459056 logs.go:282] 0 containers: []
	W0510 19:30:18.197920  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:18.197927  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:18.197985  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:18.236491  459056 cri.go:89] found id: ""
	I0510 19:30:18.236519  459056 logs.go:282] 0 containers: []
	W0510 19:30:18.236528  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:18.236535  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:18.236591  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:18.275316  459056 cri.go:89] found id: ""
	I0510 19:30:18.275355  459056 logs.go:282] 0 containers: []
	W0510 19:30:18.275368  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:18.275376  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:18.275447  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:18.314904  459056 cri.go:89] found id: ""
	I0510 19:30:18.314946  459056 logs.go:282] 0 containers: []
	W0510 19:30:18.314963  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:18.314972  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:18.315049  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:18.353877  459056 cri.go:89] found id: ""
	I0510 19:30:18.353906  459056 logs.go:282] 0 containers: []
	W0510 19:30:18.353924  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:18.353933  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:18.354019  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:18.391081  459056 cri.go:89] found id: ""
	I0510 19:30:18.391115  459056 logs.go:282] 0 containers: []
	W0510 19:30:18.391124  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:18.391131  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:18.391208  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:18.430112  459056 cri.go:89] found id: ""
	I0510 19:30:18.430151  459056 logs.go:282] 0 containers: []
	W0510 19:30:18.430165  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:18.430171  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:18.430241  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:18.467247  459056 cri.go:89] found id: ""
	I0510 19:30:18.467282  459056 logs.go:282] 0 containers: []
	W0510 19:30:18.467294  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:18.467307  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:18.467331  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:18.483013  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:18.483049  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:18.556404  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:18.556437  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:18.556457  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:18.634193  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:18.634242  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:18.677713  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:18.677752  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:21.230499  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:21.248397  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:21.248485  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:21.284922  459056 cri.go:89] found id: ""
	I0510 19:30:21.284961  459056 logs.go:282] 0 containers: []
	W0510 19:30:21.284974  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:21.284983  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:21.285062  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:21.323019  459056 cri.go:89] found id: ""
	I0510 19:30:21.323054  459056 logs.go:282] 0 containers: []
	W0510 19:30:21.323064  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:21.323071  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:21.323148  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:21.361809  459056 cri.go:89] found id: ""
	I0510 19:30:21.361838  459056 logs.go:282] 0 containers: []
	W0510 19:30:21.361846  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:21.361852  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:21.361930  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:21.399062  459056 cri.go:89] found id: ""
	I0510 19:30:21.399101  459056 logs.go:282] 0 containers: []
	W0510 19:30:21.399115  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:21.399124  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:21.399195  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:21.436027  459056 cri.go:89] found id: ""
	I0510 19:30:21.436061  459056 logs.go:282] 0 containers: []
	W0510 19:30:21.436071  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:21.436077  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:21.436143  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:21.481101  459056 cri.go:89] found id: ""
	I0510 19:30:21.481141  459056 logs.go:282] 0 containers: []
	W0510 19:30:21.481151  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:21.481158  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:21.481213  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:21.525918  459056 cri.go:89] found id: ""
	I0510 19:30:21.525949  459056 logs.go:282] 0 containers: []
	W0510 19:30:21.525958  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:21.525965  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:21.526051  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:21.566402  459056 cri.go:89] found id: ""
	I0510 19:30:21.566438  459056 logs.go:282] 0 containers: []
	W0510 19:30:21.566451  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:21.566466  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:21.566483  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:21.640295  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:21.640326  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:21.640344  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:21.723808  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:21.723860  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:21.787009  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:21.787053  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:21.846605  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:21.846653  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:24.365273  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:24.382257  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:24.382346  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:24.422109  459056 cri.go:89] found id: ""
	I0510 19:30:24.422145  459056 logs.go:282] 0 containers: []
	W0510 19:30:24.422154  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:24.422161  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:24.422223  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:24.461355  459056 cri.go:89] found id: ""
	I0510 19:30:24.461382  459056 logs.go:282] 0 containers: []
	W0510 19:30:24.461389  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:24.461395  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:24.461451  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:24.500168  459056 cri.go:89] found id: ""
	I0510 19:30:24.500203  459056 logs.go:282] 0 containers: []
	W0510 19:30:24.500214  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:24.500222  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:24.500293  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:24.535437  459056 cri.go:89] found id: ""
	I0510 19:30:24.535473  459056 logs.go:282] 0 containers: []
	W0510 19:30:24.535481  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:24.535487  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:24.535567  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:24.574226  459056 cri.go:89] found id: ""
	I0510 19:30:24.574262  459056 logs.go:282] 0 containers: []
	W0510 19:30:24.574274  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:24.574282  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:24.574353  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:24.611038  459056 cri.go:89] found id: ""
	I0510 19:30:24.611076  459056 logs.go:282] 0 containers: []
	W0510 19:30:24.611085  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:24.611094  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:24.611148  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:24.650255  459056 cri.go:89] found id: ""
	I0510 19:30:24.650291  459056 logs.go:282] 0 containers: []
	W0510 19:30:24.650303  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:24.650313  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:24.650382  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:24.688115  459056 cri.go:89] found id: ""
	I0510 19:30:24.688148  459056 logs.go:282] 0 containers: []
	W0510 19:30:24.688157  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:24.688166  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:24.688180  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:24.738142  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:24.738193  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:24.754027  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:24.754059  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:24.836221  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:24.836251  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:24.836270  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:24.911260  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:24.911306  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:27.453339  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:27.470837  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:27.470922  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:27.510141  459056 cri.go:89] found id: ""
	I0510 19:30:27.510171  459056 logs.go:282] 0 containers: []
	W0510 19:30:27.510180  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:27.510187  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:27.510245  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:27.560311  459056 cri.go:89] found id: ""
	I0510 19:30:27.560337  459056 logs.go:282] 0 containers: []
	W0510 19:30:27.560346  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:27.560352  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:27.560412  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:27.615618  459056 cri.go:89] found id: ""
	I0510 19:30:27.615648  459056 logs.go:282] 0 containers: []
	W0510 19:30:27.615658  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:27.615683  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:27.615745  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:27.663257  459056 cri.go:89] found id: ""
	I0510 19:30:27.663290  459056 logs.go:282] 0 containers: []
	W0510 19:30:27.663298  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:27.663305  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:27.663377  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:27.705815  459056 cri.go:89] found id: ""
	I0510 19:30:27.705856  459056 logs.go:282] 0 containers: []
	W0510 19:30:27.705864  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:27.705870  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:27.705932  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:27.744580  459056 cri.go:89] found id: ""
	I0510 19:30:27.744612  459056 logs.go:282] 0 containers: []
	W0510 19:30:27.744620  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:27.744637  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:27.744694  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:27.781041  459056 cri.go:89] found id: ""
	I0510 19:30:27.781070  459056 logs.go:282] 0 containers: []
	W0510 19:30:27.781078  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:27.781087  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:27.781145  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:27.818543  459056 cri.go:89] found id: ""
	I0510 19:30:27.818583  459056 logs.go:282] 0 containers: []
	W0510 19:30:27.818592  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:27.818603  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:27.818631  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:27.834004  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:27.834038  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:27.907944  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:27.907973  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:27.907991  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:27.988229  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:27.988276  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:28.032107  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:28.032141  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:30.581752  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:30.599095  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:30.599167  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:30.637772  459056 cri.go:89] found id: ""
	I0510 19:30:30.637804  459056 logs.go:282] 0 containers: []
	W0510 19:30:30.637815  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:30.637824  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:30.637894  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:30.674650  459056 cri.go:89] found id: ""
	I0510 19:30:30.674690  459056 logs.go:282] 0 containers: []
	W0510 19:30:30.674702  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:30.674709  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:30.674791  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:30.712335  459056 cri.go:89] found id: ""
	I0510 19:30:30.712370  459056 logs.go:282] 0 containers: []
	W0510 19:30:30.712379  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:30.712384  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:30.712457  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:30.749850  459056 cri.go:89] found id: ""
	I0510 19:30:30.749894  459056 logs.go:282] 0 containers: []
	W0510 19:30:30.749906  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:30.749914  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:30.750001  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:30.790937  459056 cri.go:89] found id: ""
	I0510 19:30:30.790976  459056 logs.go:282] 0 containers: []
	W0510 19:30:30.790985  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:30.790992  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:30.791048  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:30.830223  459056 cri.go:89] found id: ""
	I0510 19:30:30.830256  459056 logs.go:282] 0 containers: []
	W0510 19:30:30.830265  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:30.830271  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:30.830335  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:30.868658  459056 cri.go:89] found id: ""
	I0510 19:30:30.868685  459056 logs.go:282] 0 containers: []
	W0510 19:30:30.868693  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:30.868699  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:30.868755  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:30.908485  459056 cri.go:89] found id: ""
	I0510 19:30:30.908518  459056 logs.go:282] 0 containers: []
	W0510 19:30:30.908527  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:30.908537  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:30.908576  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:30.987890  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:30.987915  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:30.987930  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:31.066668  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:31.066724  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:31.114289  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:31.114322  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:31.168049  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:31.168101  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:33.685815  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:33.702996  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:33.703075  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:33.740679  459056 cri.go:89] found id: ""
	I0510 19:30:33.740710  459056 logs.go:282] 0 containers: []
	W0510 19:30:33.740718  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:33.740724  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:33.740789  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:33.778013  459056 cri.go:89] found id: ""
	I0510 19:30:33.778045  459056 logs.go:282] 0 containers: []
	W0510 19:30:33.778053  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:33.778059  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:33.778118  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:33.819601  459056 cri.go:89] found id: ""
	I0510 19:30:33.819634  459056 logs.go:282] 0 containers: []
	W0510 19:30:33.819643  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:33.819649  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:33.819719  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:33.858368  459056 cri.go:89] found id: ""
	I0510 19:30:33.858399  459056 logs.go:282] 0 containers: []
	W0510 19:30:33.858407  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:33.858414  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:33.858469  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:33.899175  459056 cri.go:89] found id: ""
	I0510 19:30:33.899210  459056 logs.go:282] 0 containers: []
	W0510 19:30:33.899219  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:33.899225  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:33.899297  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:33.938037  459056 cri.go:89] found id: ""
	I0510 19:30:33.938075  459056 logs.go:282] 0 containers: []
	W0510 19:30:33.938085  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:33.938092  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:33.938151  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:33.976364  459056 cri.go:89] found id: ""
	I0510 19:30:33.976398  459056 logs.go:282] 0 containers: []
	W0510 19:30:33.976408  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:33.976415  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:33.976474  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:34.019444  459056 cri.go:89] found id: ""
	I0510 19:30:34.019476  459056 logs.go:282] 0 containers: []
	W0510 19:30:34.019485  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:34.019496  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:34.019509  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:34.066863  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:34.066897  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:34.116346  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:34.116394  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:34.131809  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:34.131842  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:34.201228  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:34.201261  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:34.201278  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:36.784883  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:36.802185  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:36.802277  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:36.838342  459056 cri.go:89] found id: ""
	I0510 19:30:36.838382  459056 logs.go:282] 0 containers: []
	W0510 19:30:36.838395  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:36.838405  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:36.838484  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:36.875021  459056 cri.go:89] found id: ""
	I0510 19:30:36.875052  459056 logs.go:282] 0 containers: []
	W0510 19:30:36.875060  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:36.875066  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:36.875136  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:36.912550  459056 cri.go:89] found id: ""
	I0510 19:30:36.912579  459056 logs.go:282] 0 containers: []
	W0510 19:30:36.912589  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:36.912595  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:36.912672  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:36.953970  459056 cri.go:89] found id: ""
	I0510 19:30:36.954002  459056 logs.go:282] 0 containers: []
	W0510 19:30:36.954013  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:36.954021  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:36.954090  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:36.990198  459056 cri.go:89] found id: ""
	I0510 19:30:36.990227  459056 logs.go:282] 0 containers: []
	W0510 19:30:36.990236  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:36.990242  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:36.990315  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:37.026559  459056 cri.go:89] found id: ""
	I0510 19:30:37.026594  459056 logs.go:282] 0 containers: []
	W0510 19:30:37.026604  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:37.026612  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:37.026696  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:37.063080  459056 cri.go:89] found id: ""
	I0510 19:30:37.063112  459056 logs.go:282] 0 containers: []
	W0510 19:30:37.063120  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:37.063127  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:37.063181  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:37.099746  459056 cri.go:89] found id: ""
	I0510 19:30:37.099786  459056 logs.go:282] 0 containers: []
	W0510 19:30:37.099800  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:37.099814  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:37.099831  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:37.150884  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:37.150932  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:37.166536  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:37.166568  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:37.241013  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:37.241045  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:37.241062  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:37.319328  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:37.319370  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:39.863629  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:39.881255  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:39.881331  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:39.921099  459056 cri.go:89] found id: ""
	I0510 19:30:39.921128  459056 logs.go:282] 0 containers: []
	W0510 19:30:39.921136  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:39.921142  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:39.921208  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:39.958588  459056 cri.go:89] found id: ""
	I0510 19:30:39.958620  459056 logs.go:282] 0 containers: []
	W0510 19:30:39.958629  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:39.958634  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:39.958701  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:39.995129  459056 cri.go:89] found id: ""
	I0510 19:30:39.995160  459056 logs.go:282] 0 containers: []
	W0510 19:30:39.995168  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:39.995174  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:39.995230  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:40.031278  459056 cri.go:89] found id: ""
	I0510 19:30:40.031308  459056 logs.go:282] 0 containers: []
	W0510 19:30:40.031320  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:40.031328  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:40.031399  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:40.069662  459056 cri.go:89] found id: ""
	I0510 19:30:40.069694  459056 logs.go:282] 0 containers: []
	W0510 19:30:40.069703  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:40.069708  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:40.069769  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:40.106418  459056 cri.go:89] found id: ""
	I0510 19:30:40.106452  459056 logs.go:282] 0 containers: []
	W0510 19:30:40.106464  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:40.106474  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:40.106546  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:40.143694  459056 cri.go:89] found id: ""
	I0510 19:30:40.143728  459056 logs.go:282] 0 containers: []
	W0510 19:30:40.143737  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:40.143743  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:40.143812  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:40.178265  459056 cri.go:89] found id: ""
	I0510 19:30:40.178296  459056 logs.go:282] 0 containers: []
	W0510 19:30:40.178304  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:40.178314  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:40.178328  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:40.247907  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:40.247940  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:40.247959  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:40.321933  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:40.321985  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:40.368947  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:40.368991  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:40.419749  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:40.419791  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:42.936834  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:42.954258  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:42.954332  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:42.991570  459056 cri.go:89] found id: ""
	I0510 19:30:42.991603  459056 logs.go:282] 0 containers: []
	W0510 19:30:42.991611  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:42.991617  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:42.991685  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:43.029718  459056 cri.go:89] found id: ""
	I0510 19:30:43.029751  459056 logs.go:282] 0 containers: []
	W0510 19:30:43.029759  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:43.029766  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:43.029824  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:43.068297  459056 cri.go:89] found id: ""
	I0510 19:30:43.068328  459056 logs.go:282] 0 containers: []
	W0510 19:30:43.068335  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:43.068342  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:43.068405  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:43.109805  459056 cri.go:89] found id: ""
	I0510 19:30:43.109833  459056 logs.go:282] 0 containers: []
	W0510 19:30:43.109841  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:43.109847  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:43.109900  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:43.148109  459056 cri.go:89] found id: ""
	I0510 19:30:43.148141  459056 logs.go:282] 0 containers: []
	W0510 19:30:43.148149  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:43.148156  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:43.148224  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:43.185187  459056 cri.go:89] found id: ""
	I0510 19:30:43.185221  459056 logs.go:282] 0 containers: []
	W0510 19:30:43.185230  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:43.185239  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:43.185293  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:43.224447  459056 cri.go:89] found id: ""
	I0510 19:30:43.224476  459056 logs.go:282] 0 containers: []
	W0510 19:30:43.224485  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:43.224496  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:43.224552  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:43.268442  459056 cri.go:89] found id: ""
	I0510 19:30:43.268471  459056 logs.go:282] 0 containers: []
	W0510 19:30:43.268480  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:43.268489  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:43.268501  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:43.347249  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:43.347282  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:43.347307  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:43.427928  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:43.427975  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:43.473221  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:43.473258  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:43.522748  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:43.522796  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:46.040289  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:46.058969  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:46.059051  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:46.102709  459056 cri.go:89] found id: ""
	I0510 19:30:46.102757  459056 logs.go:282] 0 containers: []
	W0510 19:30:46.102775  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:46.102786  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:46.102848  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:46.146551  459056 cri.go:89] found id: ""
	I0510 19:30:46.146584  459056 logs.go:282] 0 containers: []
	W0510 19:30:46.146593  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:46.146599  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:46.146670  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:46.187445  459056 cri.go:89] found id: ""
	I0510 19:30:46.187484  459056 logs.go:282] 0 containers: []
	W0510 19:30:46.187498  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:46.187505  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:46.187575  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:46.224647  459056 cri.go:89] found id: ""
	I0510 19:30:46.224686  459056 logs.go:282] 0 containers: []
	W0510 19:30:46.224697  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:46.224706  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:46.224786  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:46.263513  459056 cri.go:89] found id: ""
	I0510 19:30:46.263545  459056 logs.go:282] 0 containers: []
	W0510 19:30:46.263554  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:46.263560  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:46.263639  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:46.300049  459056 cri.go:89] found id: ""
	I0510 19:30:46.300085  459056 logs.go:282] 0 containers: []
	W0510 19:30:46.300096  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:46.300104  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:46.300174  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:46.337107  459056 cri.go:89] found id: ""
	I0510 19:30:46.337139  459056 logs.go:282] 0 containers: []
	W0510 19:30:46.337150  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:46.337159  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:46.337219  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:46.373699  459056 cri.go:89] found id: ""
	I0510 19:30:46.373736  459056 logs.go:282] 0 containers: []
	W0510 19:30:46.373748  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:46.373761  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:46.373777  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:46.425713  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:46.425764  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:46.441565  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:46.441602  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:46.517861  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:46.517897  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:46.517918  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:46.601755  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:46.601807  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:49.147704  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:49.165325  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:49.165397  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:49.206272  459056 cri.go:89] found id: ""
	I0510 19:30:49.206309  459056 logs.go:282] 0 containers: []
	W0510 19:30:49.206318  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:49.206324  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:49.206385  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:49.241832  459056 cri.go:89] found id: ""
	I0510 19:30:49.241863  459056 logs.go:282] 0 containers: []
	W0510 19:30:49.241871  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:49.241878  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:49.241958  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:49.280474  459056 cri.go:89] found id: ""
	I0510 19:30:49.280505  459056 logs.go:282] 0 containers: []
	W0510 19:30:49.280514  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:49.280520  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:49.280577  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:49.317656  459056 cri.go:89] found id: ""
	I0510 19:30:49.317687  459056 logs.go:282] 0 containers: []
	W0510 19:30:49.317699  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:49.317718  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:49.317789  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:49.356059  459056 cri.go:89] found id: ""
	I0510 19:30:49.356094  459056 logs.go:282] 0 containers: []
	W0510 19:30:49.356102  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:49.356112  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:49.356169  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:49.396831  459056 cri.go:89] found id: ""
	I0510 19:30:49.396864  459056 logs.go:282] 0 containers: []
	W0510 19:30:49.396877  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:49.396885  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:49.396954  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:49.433301  459056 cri.go:89] found id: ""
	I0510 19:30:49.433328  459056 logs.go:282] 0 containers: []
	W0510 19:30:49.433336  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:49.433342  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:49.433416  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:49.470642  459056 cri.go:89] found id: ""
	I0510 19:30:49.470674  459056 logs.go:282] 0 containers: []
	W0510 19:30:49.470686  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:49.470698  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:49.470715  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:49.520867  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:49.520910  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:49.536370  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:49.536406  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:49.608860  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:49.608894  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:49.608913  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:49.687344  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:49.687395  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:52.231133  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:52.248456  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:52.248550  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:52.288902  459056 cri.go:89] found id: ""
	I0510 19:30:52.288960  459056 logs.go:282] 0 containers: []
	W0510 19:30:52.288973  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:52.288982  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:52.289062  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:52.326578  459056 cri.go:89] found id: ""
	I0510 19:30:52.326611  459056 logs.go:282] 0 containers: []
	W0510 19:30:52.326626  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:52.326634  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:52.326713  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:52.368627  459056 cri.go:89] found id: ""
	I0510 19:30:52.368657  459056 logs.go:282] 0 containers: []
	W0510 19:30:52.368666  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:52.368672  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:52.368754  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:52.406483  459056 cri.go:89] found id: ""
	I0510 19:30:52.406518  459056 logs.go:282] 0 containers: []
	W0510 19:30:52.406526  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:52.406533  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:52.406599  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:52.445770  459056 cri.go:89] found id: ""
	I0510 19:30:52.445805  459056 logs.go:282] 0 containers: []
	W0510 19:30:52.445816  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:52.445826  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:52.445898  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:52.484279  459056 cri.go:89] found id: ""
	I0510 19:30:52.484315  459056 logs.go:282] 0 containers: []
	W0510 19:30:52.484325  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:52.484332  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:52.484395  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:52.523564  459056 cri.go:89] found id: ""
	I0510 19:30:52.523601  459056 logs.go:282] 0 containers: []
	W0510 19:30:52.523628  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:52.523634  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:52.523701  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:52.566712  459056 cri.go:89] found id: ""
	I0510 19:30:52.566747  459056 logs.go:282] 0 containers: []
	W0510 19:30:52.566756  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:52.566768  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:52.566784  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:52.618210  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:52.618263  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:52.635481  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:52.635518  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:52.710370  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:52.710415  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:52.710435  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:52.789902  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:52.789960  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:55.334697  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:55.351738  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:55.351815  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:55.387464  459056 cri.go:89] found id: ""
	I0510 19:30:55.387493  459056 logs.go:282] 0 containers: []
	W0510 19:30:55.387503  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:55.387512  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:55.387578  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:55.424565  459056 cri.go:89] found id: ""
	I0510 19:30:55.424597  459056 logs.go:282] 0 containers: []
	W0510 19:30:55.424608  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:55.424617  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:55.424690  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:55.461558  459056 cri.go:89] found id: ""
	I0510 19:30:55.461597  459056 logs.go:282] 0 containers: []
	W0510 19:30:55.461608  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:55.461616  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:55.461689  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:55.500713  459056 cri.go:89] found id: ""
	I0510 19:30:55.500742  459056 logs.go:282] 0 containers: []
	W0510 19:30:55.500756  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:55.500763  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:55.500826  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:55.536104  459056 cri.go:89] found id: ""
	I0510 19:30:55.536132  459056 logs.go:282] 0 containers: []
	W0510 19:30:55.536141  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:55.536147  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:55.536206  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:55.571895  459056 cri.go:89] found id: ""
	I0510 19:30:55.571924  459056 logs.go:282] 0 containers: []
	W0510 19:30:55.571932  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:55.571938  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:55.571996  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:55.610794  459056 cri.go:89] found id: ""
	I0510 19:30:55.610822  459056 logs.go:282] 0 containers: []
	W0510 19:30:55.610831  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:55.610837  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:55.610904  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:55.647514  459056 cri.go:89] found id: ""
	I0510 19:30:55.647544  459056 logs.go:282] 0 containers: []
	W0510 19:30:55.647554  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:55.647563  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:55.647578  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:55.697745  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:55.697788  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:55.714126  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:55.714161  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:55.786711  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:55.786735  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:55.786749  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:55.863002  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:55.863049  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:58.428393  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:58.446138  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:58.446216  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:58.482821  459056 cri.go:89] found id: ""
	I0510 19:30:58.482856  459056 logs.go:282] 0 containers: []
	W0510 19:30:58.482872  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:58.482880  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:58.482939  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:58.524325  459056 cri.go:89] found id: ""
	I0510 19:30:58.524358  459056 logs.go:282] 0 containers: []
	W0510 19:30:58.524369  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:58.524377  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:58.524433  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:58.564327  459056 cri.go:89] found id: ""
	I0510 19:30:58.564366  459056 logs.go:282] 0 containers: []
	W0510 19:30:58.564377  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:58.564383  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:58.564439  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:58.602937  459056 cri.go:89] found id: ""
	I0510 19:30:58.602966  459056 logs.go:282] 0 containers: []
	W0510 19:30:58.602974  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:58.602981  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:58.603038  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:58.639820  459056 cri.go:89] found id: ""
	I0510 19:30:58.639852  459056 logs.go:282] 0 containers: []
	W0510 19:30:58.639863  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:58.639871  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:58.639963  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:58.676466  459056 cri.go:89] found id: ""
	I0510 19:30:58.676503  459056 logs.go:282] 0 containers: []
	W0510 19:30:58.676515  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:58.676524  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:58.676593  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:58.712669  459056 cri.go:89] found id: ""
	I0510 19:30:58.712706  459056 logs.go:282] 0 containers: []
	W0510 19:30:58.712715  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:58.712721  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:58.712797  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:58.748436  459056 cri.go:89] found id: ""
	I0510 19:30:58.748474  459056 logs.go:282] 0 containers: []
	W0510 19:30:58.748485  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:58.748496  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:58.748513  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:58.801263  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:58.801311  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:58.816908  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:58.816945  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:58.890881  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:58.890912  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:58.890932  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:58.969061  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:58.969113  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:31:01.513933  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:31:01.531492  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:31:01.531565  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:31:01.568296  459056 cri.go:89] found id: ""
	I0510 19:31:01.568324  459056 logs.go:282] 0 containers: []
	W0510 19:31:01.568333  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:31:01.568340  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:31:01.568396  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:31:01.610372  459056 cri.go:89] found id: ""
	I0510 19:31:01.610406  459056 logs.go:282] 0 containers: []
	W0510 19:31:01.610415  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:31:01.610421  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:31:01.610485  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:31:01.648652  459056 cri.go:89] found id: ""
	I0510 19:31:01.648682  459056 logs.go:282] 0 containers: []
	W0510 19:31:01.648690  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:31:01.648696  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:31:01.648751  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:31:01.686551  459056 cri.go:89] found id: ""
	I0510 19:31:01.686583  459056 logs.go:282] 0 containers: []
	W0510 19:31:01.686595  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:31:01.686604  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:31:01.686694  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:31:01.724202  459056 cri.go:89] found id: ""
	I0510 19:31:01.724243  459056 logs.go:282] 0 containers: []
	W0510 19:31:01.724255  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:31:01.724261  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:31:01.724337  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:31:01.763500  459056 cri.go:89] found id: ""
	I0510 19:31:01.763534  459056 logs.go:282] 0 containers: []
	W0510 19:31:01.763544  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:31:01.763550  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:31:01.763629  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:31:01.808280  459056 cri.go:89] found id: ""
	I0510 19:31:01.808312  459056 logs.go:282] 0 containers: []
	W0510 19:31:01.808324  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:31:01.808332  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:31:01.808403  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:31:01.843980  459056 cri.go:89] found id: ""
	I0510 19:31:01.844018  459056 logs.go:282] 0 containers: []
	W0510 19:31:01.844031  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:31:01.844044  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:31:01.844061  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:31:01.907482  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:31:01.907521  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:31:01.922645  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:31:01.922683  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:31:01.999977  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:31:02.000009  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:31:02.000031  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:31:02.078872  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:31:02.078920  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:31:04.624201  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:31:04.641739  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:31:04.641818  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:31:04.680796  459056 cri.go:89] found id: ""
	I0510 19:31:04.680825  459056 logs.go:282] 0 containers: []
	W0510 19:31:04.680833  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:31:04.680839  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:31:04.680893  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:31:04.718840  459056 cri.go:89] found id: ""
	I0510 19:31:04.718867  459056 logs.go:282] 0 containers: []
	W0510 19:31:04.718874  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:31:04.718880  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:31:04.718943  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:31:04.753687  459056 cri.go:89] found id: ""
	I0510 19:31:04.753726  459056 logs.go:282] 0 containers: []
	W0510 19:31:04.753737  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:31:04.753745  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:31:04.753815  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:31:04.790863  459056 cri.go:89] found id: ""
	I0510 19:31:04.790893  459056 logs.go:282] 0 containers: []
	W0510 19:31:04.790903  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:31:04.790910  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:31:04.790969  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:31:04.828293  459056 cri.go:89] found id: ""
	I0510 19:31:04.828321  459056 logs.go:282] 0 containers: []
	W0510 19:31:04.828329  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:31:04.828335  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:31:04.828400  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:31:04.865914  459056 cri.go:89] found id: ""
	I0510 19:31:04.865955  459056 logs.go:282] 0 containers: []
	W0510 19:31:04.865964  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:31:04.865970  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:31:04.866025  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:31:04.902834  459056 cri.go:89] found id: ""
	I0510 19:31:04.902866  459056 logs.go:282] 0 containers: []
	W0510 19:31:04.902879  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:31:04.902888  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:31:04.902960  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:31:04.939660  459056 cri.go:89] found id: ""
	I0510 19:31:04.939694  459056 logs.go:282] 0 containers: []
	W0510 19:31:04.939702  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:31:04.939711  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:31:04.939729  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:31:04.954569  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:31:04.954608  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:31:05.026998  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:31:05.027024  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:31:05.027041  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:31:05.111468  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:31:05.111520  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:31:05.155909  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:31:05.155953  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:31:07.709153  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:31:07.726572  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:31:07.726671  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:31:07.766663  459056 cri.go:89] found id: ""
	I0510 19:31:07.766691  459056 logs.go:282] 0 containers: []
	W0510 19:31:07.766703  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:31:07.766712  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:31:07.766909  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:31:07.806853  459056 cri.go:89] found id: ""
	I0510 19:31:07.806902  459056 logs.go:282] 0 containers: []
	W0510 19:31:07.806911  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:31:07.806917  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:31:07.806985  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:31:07.845188  459056 cri.go:89] found id: ""
	I0510 19:31:07.845218  459056 logs.go:282] 0 containers: []
	W0510 19:31:07.845227  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:31:07.845233  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:31:07.845291  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:31:07.884790  459056 cri.go:89] found id: ""
	I0510 19:31:07.884827  459056 logs.go:282] 0 containers: []
	W0510 19:31:07.884840  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:31:07.884847  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:31:07.884919  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:31:07.924161  459056 cri.go:89] found id: ""
	I0510 19:31:07.924195  459056 logs.go:282] 0 containers: []
	W0510 19:31:07.924206  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:31:07.924222  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:31:07.924288  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:31:07.962697  459056 cri.go:89] found id: ""
	I0510 19:31:07.962724  459056 logs.go:282] 0 containers: []
	W0510 19:31:07.962735  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:31:07.962744  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:31:07.962840  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:31:08.001266  459056 cri.go:89] found id: ""
	I0510 19:31:08.001306  459056 logs.go:282] 0 containers: []
	W0510 19:31:08.001318  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:31:08.001326  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:31:08.001418  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:31:08.040211  459056 cri.go:89] found id: ""
	I0510 19:31:08.040238  459056 logs.go:282] 0 containers: []
	W0510 19:31:08.040247  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:31:08.040255  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:31:08.040272  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:31:08.114738  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:31:08.114784  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:31:08.114802  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:31:08.188677  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:31:08.188725  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:31:08.232875  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:31:08.232908  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:31:08.293039  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:31:08.293095  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:31:10.811640  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:31:10.828942  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:31:10.829017  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:31:10.866960  459056 cri.go:89] found id: ""
	I0510 19:31:10.866993  459056 logs.go:282] 0 containers: []
	W0510 19:31:10.867003  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:31:10.867009  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:31:10.867066  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:31:10.906391  459056 cri.go:89] found id: ""
	I0510 19:31:10.906421  459056 logs.go:282] 0 containers: []
	W0510 19:31:10.906430  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:31:10.906436  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:31:10.906503  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:31:10.947062  459056 cri.go:89] found id: ""
	I0510 19:31:10.947091  459056 logs.go:282] 0 containers: []
	W0510 19:31:10.947100  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:31:10.947106  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:31:10.947172  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:31:10.984506  459056 cri.go:89] found id: ""
	I0510 19:31:10.984535  459056 logs.go:282] 0 containers: []
	W0510 19:31:10.984543  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:31:10.984549  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:31:10.984613  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:31:11.022676  459056 cri.go:89] found id: ""
	I0510 19:31:11.022715  459056 logs.go:282] 0 containers: []
	W0510 19:31:11.022724  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:31:11.022730  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:31:11.022805  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:31:11.067215  459056 cri.go:89] found id: ""
	I0510 19:31:11.067260  459056 logs.go:282] 0 containers: []
	W0510 19:31:11.067273  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:31:11.067282  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:31:11.067344  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:31:11.106883  459056 cri.go:89] found id: ""
	I0510 19:31:11.106912  459056 logs.go:282] 0 containers: []
	W0510 19:31:11.106920  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:31:11.106926  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:31:11.106984  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:31:11.148375  459056 cri.go:89] found id: ""
	I0510 19:31:11.148408  459056 logs.go:282] 0 containers: []
	W0510 19:31:11.148416  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:31:11.148426  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:31:11.148441  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:31:11.199507  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:31:11.199555  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:31:11.215477  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:31:11.215509  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:31:11.285250  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:31:11.285278  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:31:11.285292  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:31:11.365666  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:31:11.365724  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:31:13.914500  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:31:13.931769  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:31:13.931843  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:31:13.971450  459056 cri.go:89] found id: ""
	I0510 19:31:13.971481  459056 logs.go:282] 0 containers: []
	W0510 19:31:13.971491  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:31:13.971503  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:31:13.971585  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:31:14.016556  459056 cri.go:89] found id: ""
	I0510 19:31:14.016603  459056 logs.go:282] 0 containers: []
	W0510 19:31:14.016615  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:31:14.016624  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:31:14.016717  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:31:14.067360  459056 cri.go:89] found id: ""
	I0510 19:31:14.067395  459056 logs.go:282] 0 containers: []
	W0510 19:31:14.067406  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:31:14.067415  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:31:14.067490  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:31:14.115508  459056 cri.go:89] found id: ""
	I0510 19:31:14.115547  459056 logs.go:282] 0 containers: []
	W0510 19:31:14.115559  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:31:14.115566  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:31:14.115653  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:31:14.162589  459056 cri.go:89] found id: ""
	I0510 19:31:14.162620  459056 logs.go:282] 0 containers: []
	W0510 19:31:14.162629  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:31:14.162635  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:31:14.162720  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:31:14.203802  459056 cri.go:89] found id: ""
	I0510 19:31:14.203842  459056 logs.go:282] 0 containers: []
	W0510 19:31:14.203853  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:31:14.203861  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:31:14.203927  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:31:14.242404  459056 cri.go:89] found id: ""
	I0510 19:31:14.242440  459056 logs.go:282] 0 containers: []
	W0510 19:31:14.242449  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:31:14.242455  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:31:14.242526  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:31:14.279788  459056 cri.go:89] found id: ""
	I0510 19:31:14.279820  459056 logs.go:282] 0 containers: []
	W0510 19:31:14.279831  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:31:14.279843  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:31:14.279861  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:31:14.295706  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:31:14.295741  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:31:14.369637  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:31:14.369665  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:31:14.369684  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:31:14.445062  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:31:14.445113  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:31:14.488659  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:31:14.488692  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:31:17.042803  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:31:17.060263  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:31:17.060348  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:31:17.098561  459056 cri.go:89] found id: ""
	I0510 19:31:17.098588  459056 logs.go:282] 0 containers: []
	W0510 19:31:17.098597  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:31:17.098602  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:31:17.098666  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:31:17.136124  459056 cri.go:89] found id: ""
	I0510 19:31:17.136155  459056 logs.go:282] 0 containers: []
	W0510 19:31:17.136163  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:31:17.136169  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:31:17.136226  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:31:17.174746  459056 cri.go:89] found id: ""
	I0510 19:31:17.174773  459056 logs.go:282] 0 containers: []
	W0510 19:31:17.174781  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:31:17.174788  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:31:17.174853  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:31:17.211764  459056 cri.go:89] found id: ""
	I0510 19:31:17.211802  459056 logs.go:282] 0 containers: []
	W0510 19:31:17.211813  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:31:17.211822  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:31:17.211893  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:31:17.250173  459056 cri.go:89] found id: ""
	I0510 19:31:17.250220  459056 logs.go:282] 0 containers: []
	W0510 19:31:17.250231  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:31:17.250240  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:31:17.250307  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:31:17.288067  459056 cri.go:89] found id: ""
	I0510 19:31:17.288098  459056 logs.go:282] 0 containers: []
	W0510 19:31:17.288106  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:31:17.288113  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:31:17.288167  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:31:17.332174  459056 cri.go:89] found id: ""
	I0510 19:31:17.332201  459056 logs.go:282] 0 containers: []
	W0510 19:31:17.332210  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:31:17.332215  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:31:17.332279  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:31:17.368361  459056 cri.go:89] found id: ""
	I0510 19:31:17.368393  459056 logs.go:282] 0 containers: []
	W0510 19:31:17.368401  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:31:17.368414  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:31:17.368431  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:31:17.419140  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:31:17.419188  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:31:17.435060  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:31:17.435092  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:31:17.503946  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:31:17.503971  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:31:17.503985  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:31:17.577584  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:31:17.577636  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:31:20.122561  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:31:20.140245  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:31:20.140318  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:31:20.176963  459056 cri.go:89] found id: ""
	I0510 19:31:20.176997  459056 logs.go:282] 0 containers: []
	W0510 19:31:20.177006  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:31:20.177014  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:31:20.177082  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:31:20.214648  459056 cri.go:89] found id: ""
	I0510 19:31:20.214686  459056 logs.go:282] 0 containers: []
	W0510 19:31:20.214694  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:31:20.214700  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:31:20.214756  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:31:20.252572  459056 cri.go:89] found id: ""
	I0510 19:31:20.252603  459056 logs.go:282] 0 containers: []
	W0510 19:31:20.252610  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:31:20.252616  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:31:20.252690  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:31:20.292626  459056 cri.go:89] found id: ""
	I0510 19:31:20.292658  459056 logs.go:282] 0 containers: []
	W0510 19:31:20.292667  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:31:20.292673  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:31:20.292731  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:31:20.331394  459056 cri.go:89] found id: ""
	I0510 19:31:20.331426  459056 logs.go:282] 0 containers: []
	W0510 19:31:20.331433  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:31:20.331440  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:31:20.331493  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:31:20.369499  459056 cri.go:89] found id: ""
	I0510 19:31:20.369526  459056 logs.go:282] 0 containers: []
	W0510 19:31:20.369534  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:31:20.369541  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:31:20.369598  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:31:20.409063  459056 cri.go:89] found id: ""
	I0510 19:31:20.409101  459056 logs.go:282] 0 containers: []
	W0510 19:31:20.409119  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:31:20.409129  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:31:20.409202  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:31:20.448127  459056 cri.go:89] found id: ""
	I0510 19:31:20.448165  459056 logs.go:282] 0 containers: []
	W0510 19:31:20.448176  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:31:20.448192  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:31:20.448217  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:31:20.529717  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:31:20.529761  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:31:20.572287  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:31:20.572324  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:31:20.622908  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:31:20.622953  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:31:20.638966  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:31:20.639001  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:31:20.710197  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:31:23.211978  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:31:23.228993  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:31:23.229066  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:31:23.266521  459056 cri.go:89] found id: ""
	I0510 19:31:23.266554  459056 logs.go:282] 0 containers: []
	W0510 19:31:23.266563  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:31:23.266570  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:31:23.266624  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:31:23.305315  459056 cri.go:89] found id: ""
	I0510 19:31:23.305348  459056 logs.go:282] 0 containers: []
	W0510 19:31:23.305362  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:31:23.305371  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:31:23.305428  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:31:23.353734  459056 cri.go:89] found id: ""
	I0510 19:31:23.353764  459056 logs.go:282] 0 containers: []
	W0510 19:31:23.353773  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:31:23.353779  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:31:23.353836  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:31:23.392351  459056 cri.go:89] found id: ""
	I0510 19:31:23.392389  459056 logs.go:282] 0 containers: []
	W0510 19:31:23.392400  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:31:23.392408  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:31:23.392481  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:31:23.432302  459056 cri.go:89] found id: ""
	I0510 19:31:23.432338  459056 logs.go:282] 0 containers: []
	W0510 19:31:23.432349  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:31:23.432357  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:31:23.432423  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:31:23.470143  459056 cri.go:89] found id: ""
	I0510 19:31:23.470171  459056 logs.go:282] 0 containers: []
	W0510 19:31:23.470178  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:31:23.470184  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:31:23.470240  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:31:23.510123  459056 cri.go:89] found id: ""
	I0510 19:31:23.510151  459056 logs.go:282] 0 containers: []
	W0510 19:31:23.510158  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:31:23.510164  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:31:23.510218  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:31:23.548111  459056 cri.go:89] found id: ""
	I0510 19:31:23.548146  459056 logs.go:282] 0 containers: []
	W0510 19:31:23.548155  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:31:23.548165  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:31:23.548177  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:31:23.592214  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:31:23.592252  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:31:23.644384  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:31:23.644431  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:31:23.660004  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:31:23.660050  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:31:23.737601  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:31:23.737630  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:31:23.737646  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:31:26.318790  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:31:26.335345  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:31:26.335418  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:31:26.374890  459056 cri.go:89] found id: ""
	I0510 19:31:26.374925  459056 logs.go:282] 0 containers: []
	W0510 19:31:26.374939  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:31:26.374949  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:31:26.375022  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:31:26.416223  459056 cri.go:89] found id: ""
	I0510 19:31:26.416256  459056 logs.go:282] 0 containers: []
	W0510 19:31:26.416269  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:31:26.416279  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:31:26.416360  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:31:26.455431  459056 cri.go:89] found id: ""
	I0510 19:31:26.455472  459056 logs.go:282] 0 containers: []
	W0510 19:31:26.455485  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:31:26.455493  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:31:26.455563  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:31:26.493542  459056 cri.go:89] found id: ""
	I0510 19:31:26.493569  459056 logs.go:282] 0 containers: []
	W0510 19:31:26.493579  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:31:26.493588  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:31:26.493657  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:31:26.536613  459056 cri.go:89] found id: ""
	I0510 19:31:26.536642  459056 logs.go:282] 0 containers: []
	W0510 19:31:26.536651  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:31:26.536657  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:31:26.536742  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:31:26.574555  459056 cri.go:89] found id: ""
	I0510 19:31:26.574589  459056 logs.go:282] 0 containers: []
	W0510 19:31:26.574601  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:31:26.574610  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:31:26.574686  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:31:26.615726  459056 cri.go:89] found id: ""
	I0510 19:31:26.615767  459056 logs.go:282] 0 containers: []
	W0510 19:31:26.615779  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:31:26.615794  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:31:26.616130  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:31:26.658332  459056 cri.go:89] found id: ""
	I0510 19:31:26.658364  459056 logs.go:282] 0 containers: []
	W0510 19:31:26.658373  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:31:26.658382  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:31:26.658397  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:31:26.714050  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:31:26.714103  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:31:26.729247  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:31:26.729283  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:31:26.802056  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:31:26.802098  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:31:26.802117  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:31:26.880723  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:31:26.880777  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:31:29.424963  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:31:29.442400  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:31:29.442471  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:31:29.480974  459056 cri.go:89] found id: ""
	I0510 19:31:29.481014  459056 logs.go:282] 0 containers: []
	W0510 19:31:29.481025  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:31:29.481032  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:31:29.481103  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:31:29.517132  459056 cri.go:89] found id: ""
	I0510 19:31:29.517178  459056 logs.go:282] 0 containers: []
	W0510 19:31:29.517190  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:31:29.517199  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:31:29.517271  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:31:29.555573  459056 cri.go:89] found id: ""
	I0510 19:31:29.555610  459056 logs.go:282] 0 containers: []
	W0510 19:31:29.555621  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:31:29.555629  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:31:29.555706  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:31:29.591136  459056 cri.go:89] found id: ""
	I0510 19:31:29.591168  459056 logs.go:282] 0 containers: []
	W0510 19:31:29.591175  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:31:29.591181  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:31:29.591249  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:31:29.629174  459056 cri.go:89] found id: ""
	I0510 19:31:29.629205  459056 logs.go:282] 0 containers: []
	W0510 19:31:29.629214  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:31:29.629220  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:31:29.629285  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:31:29.666035  459056 cri.go:89] found id: ""
	I0510 19:31:29.666067  459056 logs.go:282] 0 containers: []
	W0510 19:31:29.666075  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:31:29.666081  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:31:29.666140  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:31:29.705842  459056 cri.go:89] found id: ""
	I0510 19:31:29.705872  459056 logs.go:282] 0 containers: []
	W0510 19:31:29.705880  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:31:29.705886  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:31:29.705964  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:31:29.743559  459056 cri.go:89] found id: ""
	I0510 19:31:29.743592  459056 logs.go:282] 0 containers: []
	W0510 19:31:29.743600  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:31:29.743623  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:31:29.743637  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:31:29.792453  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:31:29.792499  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:31:29.807725  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:31:29.807765  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:31:29.881784  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:31:29.881812  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:31:29.881825  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:31:29.954965  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:31:29.955014  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:31:32.502586  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:31:32.520169  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:31:32.520239  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:31:32.557308  459056 cri.go:89] found id: ""
	I0510 19:31:32.557342  459056 logs.go:282] 0 containers: []
	W0510 19:31:32.557350  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:31:32.557356  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:31:32.557411  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:31:32.595792  459056 cri.go:89] found id: ""
	I0510 19:31:32.595822  459056 logs.go:282] 0 containers: []
	W0510 19:31:32.595830  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:31:32.595835  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:31:32.595891  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:31:32.634389  459056 cri.go:89] found id: ""
	I0510 19:31:32.634429  459056 logs.go:282] 0 containers: []
	W0510 19:31:32.634437  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:31:32.634443  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:31:32.634517  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:31:32.675925  459056 cri.go:89] found id: ""
	I0510 19:31:32.675957  459056 logs.go:282] 0 containers: []
	W0510 19:31:32.675966  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:31:32.675973  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:31:32.676027  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:31:32.712730  459056 cri.go:89] found id: ""
	I0510 19:31:32.712767  459056 logs.go:282] 0 containers: []
	W0510 19:31:32.712776  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:31:32.712782  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:31:32.712843  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:31:32.749733  459056 cri.go:89] found id: ""
	I0510 19:31:32.749765  459056 logs.go:282] 0 containers: []
	W0510 19:31:32.749774  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:31:32.749781  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:31:32.749841  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:31:32.789481  459056 cri.go:89] found id: ""
	I0510 19:31:32.789513  459056 logs.go:282] 0 containers: []
	W0510 19:31:32.789521  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:31:32.789527  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:31:32.789586  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:31:32.828742  459056 cri.go:89] found id: ""
	I0510 19:31:32.828779  459056 logs.go:282] 0 containers: []
	W0510 19:31:32.828788  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:31:32.828798  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:31:32.828822  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:31:32.843753  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:31:32.843787  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:31:32.912953  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:31:32.912982  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:31:32.912995  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:31:32.989726  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:31:32.989770  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:31:33.040906  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:31:33.040943  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:31:35.593878  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:31:35.612402  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:31:35.612506  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:31:35.651532  459056 cri.go:89] found id: ""
	I0510 19:31:35.651562  459056 logs.go:282] 0 containers: []
	W0510 19:31:35.651571  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:31:35.651579  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:31:35.651671  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:31:35.689499  459056 cri.go:89] found id: ""
	I0510 19:31:35.689530  459056 logs.go:282] 0 containers: []
	W0510 19:31:35.689539  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:31:35.689546  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:31:35.689611  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:31:35.729195  459056 cri.go:89] found id: ""
	I0510 19:31:35.729230  459056 logs.go:282] 0 containers: []
	W0510 19:31:35.729239  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:31:35.729245  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:31:35.729314  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:31:35.767099  459056 cri.go:89] found id: ""
	I0510 19:31:35.767133  459056 logs.go:282] 0 containers: []
	W0510 19:31:35.767146  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:31:35.767151  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:31:35.767208  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:31:35.808130  459056 cri.go:89] found id: ""
	I0510 19:31:35.808166  459056 logs.go:282] 0 containers: []
	W0510 19:31:35.808179  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:31:35.808187  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:31:35.808261  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:31:35.845791  459056 cri.go:89] found id: ""
	I0510 19:31:35.845824  459056 logs.go:282] 0 containers: []
	W0510 19:31:35.845834  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:31:35.845841  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:31:35.846005  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:31:35.884049  459056 cri.go:89] found id: ""
	I0510 19:31:35.884083  459056 logs.go:282] 0 containers: []
	W0510 19:31:35.884093  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:31:35.884101  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:31:35.884182  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:31:35.921358  459056 cri.go:89] found id: ""
	I0510 19:31:35.921405  459056 logs.go:282] 0 containers: []
	W0510 19:31:35.921438  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:31:35.921454  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:31:35.921471  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:31:35.975819  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:31:35.975866  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:31:35.991683  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:31:35.991719  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:31:36.062576  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:31:36.062609  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:31:36.062692  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:31:36.144124  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:31:36.144171  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:31:38.688627  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:31:38.706961  459056 kubeadm.go:593] duration metric: took 4m1.80853031s to restartPrimaryControlPlane
	W0510 19:31:38.707088  459056 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0510 19:31:38.707129  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0510 19:31:42.433199  459056 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.726037031s)
	I0510 19:31:42.433304  459056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0510 19:31:42.450520  459056 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0510 19:31:42.464170  459056 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0510 19:31:42.478440  459056 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0510 19:31:42.478465  459056 kubeadm.go:157] found existing configuration files:
	
	I0510 19:31:42.478527  459056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0510 19:31:42.490756  459056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0510 19:31:42.490825  459056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0510 19:31:42.503476  459056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0510 19:31:42.516078  459056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0510 19:31:42.516162  459056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0510 19:31:42.529093  459056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0510 19:31:42.541784  459056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0510 19:31:42.541857  459056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0510 19:31:42.554154  459056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0510 19:31:42.566298  459056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0510 19:31:42.566366  459056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0510 19:31:42.579144  459056 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0510 19:31:42.808604  459056 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0510 19:33:39.237462  459056 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0510 19:33:39.237653  459056 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0510 19:33:39.240214  459056 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0510 19:33:39.240284  459056 kubeadm.go:310] [preflight] Running pre-flight checks
	I0510 19:33:39.240378  459056 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0510 19:33:39.240505  459056 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0510 19:33:39.240669  459056 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0510 19:33:39.240726  459056 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0510 19:33:39.242836  459056 out.go:235]   - Generating certificates and keys ...
	I0510 19:33:39.242931  459056 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0510 19:33:39.243010  459056 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0510 19:33:39.243103  459056 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0510 19:33:39.243180  459056 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0510 19:33:39.243286  459056 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0510 19:33:39.243366  459056 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0510 19:33:39.243440  459056 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0510 19:33:39.243544  459056 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0510 19:33:39.243662  459056 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0510 19:33:39.243769  459056 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0510 19:33:39.243830  459056 kubeadm.go:310] [certs] Using the existing "sa" key
	I0510 19:33:39.243905  459056 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0510 19:33:39.243972  459056 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0510 19:33:39.244018  459056 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0510 19:33:39.244072  459056 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0510 19:33:39.244132  459056 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0510 19:33:39.244227  459056 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0510 19:33:39.244322  459056 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0510 19:33:39.244375  459056 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0510 19:33:39.244459  459056 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0510 19:33:39.246586  459056 out.go:235]   - Booting up control plane ...
	I0510 19:33:39.246698  459056 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0510 19:33:39.246800  459056 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0510 19:33:39.246872  459056 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0510 19:33:39.246943  459056 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0510 19:33:39.247151  459056 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0510 19:33:39.247198  459056 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0510 19:33:39.247270  459056 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:33:39.247423  459056 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:33:39.247478  459056 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:33:39.247671  459056 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:33:39.247748  459056 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:33:39.247894  459056 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:33:39.247981  459056 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:33:39.248179  459056 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:33:39.248247  459056 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:33:39.248415  459056 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:33:39.248423  459056 kubeadm.go:310] 
	I0510 19:33:39.248461  459056 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0510 19:33:39.248497  459056 kubeadm.go:310] 		timed out waiting for the condition
	I0510 19:33:39.248507  459056 kubeadm.go:310] 
	I0510 19:33:39.248540  459056 kubeadm.go:310] 	This error is likely caused by:
	I0510 19:33:39.248570  459056 kubeadm.go:310] 		- The kubelet is not running
	I0510 19:33:39.248664  459056 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0510 19:33:39.248671  459056 kubeadm.go:310] 
	I0510 19:33:39.248767  459056 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0510 19:33:39.248803  459056 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0510 19:33:39.248832  459056 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0510 19:33:39.248839  459056 kubeadm.go:310] 
	I0510 19:33:39.248927  459056 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0510 19:33:39.249007  459056 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0510 19:33:39.249014  459056 kubeadm.go:310] 
	I0510 19:33:39.249164  459056 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0510 19:33:39.249288  459056 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0510 19:33:39.249351  459056 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0510 19:33:39.249408  459056 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0510 19:33:39.249484  459056 kubeadm.go:310] 
	W0510 19:33:39.249624  459056 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0510 19:33:39.249703  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0510 19:33:39.710770  459056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0510 19:33:39.729461  459056 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0510 19:33:39.741531  459056 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0510 19:33:39.741556  459056 kubeadm.go:157] found existing configuration files:
	
	I0510 19:33:39.741617  459056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0510 19:33:39.752271  459056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0510 19:33:39.752339  459056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0510 19:33:39.764450  459056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0510 19:33:39.775142  459056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0510 19:33:39.775203  459056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0510 19:33:39.787008  459056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0510 19:33:39.798070  459056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0510 19:33:39.798143  459056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0510 19:33:39.809980  459056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0510 19:33:39.821862  459056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0510 19:33:39.821930  459056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0510 19:33:39.833890  459056 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0510 19:33:40.070673  459056 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0510 19:35:36.029186  459056 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0510 19:35:36.029314  459056 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0510 19:35:36.032027  459056 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0510 19:35:36.032078  459056 kubeadm.go:310] [preflight] Running pre-flight checks
	I0510 19:35:36.032177  459056 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0510 19:35:36.032280  459056 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0510 19:35:36.032361  459056 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0510 19:35:36.032446  459056 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0510 19:35:36.034371  459056 out.go:235]   - Generating certificates and keys ...
	I0510 19:35:36.034447  459056 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0510 19:35:36.034498  459056 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0510 19:35:36.034563  459056 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0510 19:35:36.034612  459056 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0510 19:35:36.034675  459056 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0510 19:35:36.034778  459056 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0510 19:35:36.034874  459056 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0510 19:35:36.034977  459056 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0510 19:35:36.035054  459056 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0510 19:35:36.035126  459056 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0510 19:35:36.035158  459056 kubeadm.go:310] [certs] Using the existing "sa" key
	I0510 19:35:36.035206  459056 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0510 19:35:36.035286  459056 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0510 19:35:36.035370  459056 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0510 19:35:36.035434  459056 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0510 19:35:36.035501  459056 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0510 19:35:36.035658  459056 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0510 19:35:36.035738  459056 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0510 19:35:36.035795  459056 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0510 19:35:36.035884  459056 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0510 19:35:36.037686  459056 out.go:235]   - Booting up control plane ...
	I0510 19:35:36.037791  459056 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0510 19:35:36.037869  459056 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0510 19:35:36.037934  459056 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0510 19:35:36.038008  459056 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0510 19:35:36.038231  459056 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0510 19:35:36.038305  459056 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0510 19:35:36.038398  459056 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:35:36.038630  459056 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:35:36.038727  459056 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:35:36.038913  459056 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:35:36.038987  459056 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:35:36.039203  459056 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:35:36.039326  459056 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:35:36.039577  459056 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:35:36.039655  459056 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:35:36.039818  459056 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:35:36.039825  459056 kubeadm.go:310] 
	I0510 19:35:36.039859  459056 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0510 19:35:36.039904  459056 kubeadm.go:310] 		timed out waiting for the condition
	I0510 19:35:36.039919  459056 kubeadm.go:310] 
	I0510 19:35:36.039948  459056 kubeadm.go:310] 	This error is likely caused by:
	I0510 19:35:36.039978  459056 kubeadm.go:310] 		- The kubelet is not running
	I0510 19:35:36.040071  459056 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0510 19:35:36.040078  459056 kubeadm.go:310] 
	I0510 19:35:36.040179  459056 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0510 19:35:36.040209  459056 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0510 19:35:36.040237  459056 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0510 19:35:36.040244  459056 kubeadm.go:310] 
	I0510 19:35:36.040337  459056 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0510 19:35:36.040419  459056 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0510 19:35:36.040442  459056 kubeadm.go:310] 
	I0510 19:35:36.040555  459056 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0510 19:35:36.040655  459056 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0510 19:35:36.040766  459056 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0510 19:35:36.040836  459056 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0510 19:35:36.040862  459056 kubeadm.go:310] 
	I0510 19:35:36.040906  459056 kubeadm.go:394] duration metric: took 7m59.202425038s to StartCluster
	I0510 19:35:36.040958  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:35:36.041023  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:35:36.097650  459056 cri.go:89] found id: ""
	I0510 19:35:36.097683  459056 logs.go:282] 0 containers: []
	W0510 19:35:36.097698  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:35:36.097708  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:35:36.097773  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:35:36.142587  459056 cri.go:89] found id: ""
	I0510 19:35:36.142619  459056 logs.go:282] 0 containers: []
	W0510 19:35:36.142627  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:35:36.142633  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:35:36.142702  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:35:36.186330  459056 cri.go:89] found id: ""
	I0510 19:35:36.186361  459056 logs.go:282] 0 containers: []
	W0510 19:35:36.186370  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:35:36.186376  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:35:36.186444  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:35:36.230965  459056 cri.go:89] found id: ""
	I0510 19:35:36.230994  459056 logs.go:282] 0 containers: []
	W0510 19:35:36.231001  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:35:36.231007  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:35:36.231062  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:35:36.276491  459056 cri.go:89] found id: ""
	I0510 19:35:36.276520  459056 logs.go:282] 0 containers: []
	W0510 19:35:36.276528  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:35:36.276534  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:35:36.276598  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:35:36.321937  459056 cri.go:89] found id: ""
	I0510 19:35:36.321971  459056 logs.go:282] 0 containers: []
	W0510 19:35:36.321980  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:35:36.321987  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:35:36.322050  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:35:36.364757  459056 cri.go:89] found id: ""
	I0510 19:35:36.364797  459056 logs.go:282] 0 containers: []
	W0510 19:35:36.364809  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:35:36.364818  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:35:36.364875  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:35:36.409488  459056 cri.go:89] found id: ""
	I0510 19:35:36.409523  459056 logs.go:282] 0 containers: []
	W0510 19:35:36.409532  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:35:36.409546  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:35:36.409561  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:35:36.462665  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:35:36.462705  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:35:36.478560  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:35:36.478591  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:35:36.555871  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:35:36.555904  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:35:36.555922  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:35:36.674559  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:35:36.674603  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0510 19:35:36.723413  459056 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0510 19:35:36.723488  459056 out.go:270] * 
	W0510 19:35:36.723574  459056 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0510 19:35:36.723589  459056 out.go:270] * 
	W0510 19:35:36.724458  459056 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0510 19:35:36.727493  459056 out.go:201] 
	W0510 19:35:36.728543  459056 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0510 19:35:36.728588  459056 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0510 19:35:36.728604  459056 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0510 19:35:36.729894  459056 out.go:201] 
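	The kubeadm output above repeatedly points at the kubelet as the failing component. A minimal sketch of the checks it recommends, run on the node itself (for example via `minikube ssh -p old-k8s-version-089147`); the crictl endpoint is the CRI-O socket used throughout this run, and the flags follow the suggestions printed in the log rather than any additional tooling:
	
		# Is the kubelet running, and why did it stop?
		sudo systemctl status kubelet
		sudo journalctl -xeu kubelet
		# Did any control-plane container start and crash under CRI-O?
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		# Inspect a failing container's logs (CONTAINERID taken from the listing above):
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
	
	If the kubelet logs show a cgroup-driver mismatch, the suggestion above (`--extra-config=kubelet.cgroup-driver=systemd` on `minikube start`) is the relevant knob; see https://github.com/kubernetes/minikube/issues/4172.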
	
	
	==> CRI-O <==
	May 10 19:35:38 old-k8s-version-089147 crio[815]: time="2025-05-10 19:35:38.119067925Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746905738119045567,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=72519f44-79ae-40a3-8ecf-b0feac053145 name=/runtime.v1.ImageService/ImageFsInfo
	May 10 19:35:38 old-k8s-version-089147 crio[815]: time="2025-05-10 19:35:38.119772835Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=20354a5d-dac3-460f-a425-5f1e3403c60a name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:35:38 old-k8s-version-089147 crio[815]: time="2025-05-10 19:35:38.119866508Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=20354a5d-dac3-460f-a425-5f1e3403c60a name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:35:38 old-k8s-version-089147 crio[815]: time="2025-05-10 19:35:38.119901816Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=20354a5d-dac3-460f-a425-5f1e3403c60a name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:35:38 old-k8s-version-089147 crio[815]: time="2025-05-10 19:35:38.155474523Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ce16827b-8ff9-471c-99f7-5e728e7f726c name=/runtime.v1.RuntimeService/Version
	May 10 19:35:38 old-k8s-version-089147 crio[815]: time="2025-05-10 19:35:38.155543942Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ce16827b-8ff9-471c-99f7-5e728e7f726c name=/runtime.v1.RuntimeService/Version
	May 10 19:35:38 old-k8s-version-089147 crio[815]: time="2025-05-10 19:35:38.157284364Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=27f4f314-bba7-46b6-9d9f-398bf87274f5 name=/runtime.v1.ImageService/ImageFsInfo
	May 10 19:35:38 old-k8s-version-089147 crio[815]: time="2025-05-10 19:35:38.157640569Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746905738157619826,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=27f4f314-bba7-46b6-9d9f-398bf87274f5 name=/runtime.v1.ImageService/ImageFsInfo
	May 10 19:35:38 old-k8s-version-089147 crio[815]: time="2025-05-10 19:35:38.158383186Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8af73189-c498-402d-a9ac-0349a8a244a3 name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:35:38 old-k8s-version-089147 crio[815]: time="2025-05-10 19:35:38.158486417Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8af73189-c498-402d-a9ac-0349a8a244a3 name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:35:38 old-k8s-version-089147 crio[815]: time="2025-05-10 19:35:38.158526443Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=8af73189-c498-402d-a9ac-0349a8a244a3 name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:35:38 old-k8s-version-089147 crio[815]: time="2025-05-10 19:35:38.194664992Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=32496289-782c-43a3-b263-938f755eaf44 name=/runtime.v1.RuntimeService/Version
	May 10 19:35:38 old-k8s-version-089147 crio[815]: time="2025-05-10 19:35:38.194737211Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=32496289-782c-43a3-b263-938f755eaf44 name=/runtime.v1.RuntimeService/Version
	May 10 19:35:38 old-k8s-version-089147 crio[815]: time="2025-05-10 19:35:38.196266010Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bdbdf43a-701b-47ae-9ab5-779cbcea9a54 name=/runtime.v1.ImageService/ImageFsInfo
	May 10 19:35:38 old-k8s-version-089147 crio[815]: time="2025-05-10 19:35:38.196629062Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746905738196606900,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bdbdf43a-701b-47ae-9ab5-779cbcea9a54 name=/runtime.v1.ImageService/ImageFsInfo
	May 10 19:35:38 old-k8s-version-089147 crio[815]: time="2025-05-10 19:35:38.197288803Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1afe37d9-8382-49d8-b3f1-ce0a55c09325 name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:35:38 old-k8s-version-089147 crio[815]: time="2025-05-10 19:35:38.197339716Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1afe37d9-8382-49d8-b3f1-ce0a55c09325 name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:35:38 old-k8s-version-089147 crio[815]: time="2025-05-10 19:35:38.197367914Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1afe37d9-8382-49d8-b3f1-ce0a55c09325 name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:35:38 old-k8s-version-089147 crio[815]: time="2025-05-10 19:35:38.231909044Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1119386b-9a80-4e01-8bd6-dc0a022d11ea name=/runtime.v1.RuntimeService/Version
	May 10 19:35:38 old-k8s-version-089147 crio[815]: time="2025-05-10 19:35:38.231977886Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1119386b-9a80-4e01-8bd6-dc0a022d11ea name=/runtime.v1.RuntimeService/Version
	May 10 19:35:38 old-k8s-version-089147 crio[815]: time="2025-05-10 19:35:38.233232236Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9fb7fc90-280b-49bc-9af5-1ec3a6262753 name=/runtime.v1.ImageService/ImageFsInfo
	May 10 19:35:38 old-k8s-version-089147 crio[815]: time="2025-05-10 19:35:38.233611675Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746905738233591311,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9fb7fc90-280b-49bc-9af5-1ec3a6262753 name=/runtime.v1.ImageService/ImageFsInfo
	May 10 19:35:38 old-k8s-version-089147 crio[815]: time="2025-05-10 19:35:38.234342182Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=31be83d3-24d5-42da-911f-c3a98834fef7 name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:35:38 old-k8s-version-089147 crio[815]: time="2025-05-10 19:35:38.234388550Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=31be83d3-24d5-42da-911f-c3a98834fef7 name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:35:38 old-k8s-version-089147 crio[815]: time="2025-05-10 19:35:38.234456149Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=31be83d3-24d5-42da-911f-c3a98834fef7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
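	The refusal on localhost:8443 reported above can be confirmed from inside the node. A minimal sketch, assuming the same profile and the default apiserver port; availability of ss on the minikube guest is an assumption:

	# Check whether anything is listening on the apiserver port:
	out/minikube-linux-amd64 -p old-k8s-version-089147 ssh "sudo ss -ltnp | grep 8443 || echo nothing listening on 8443"
	# Probe the apiserver health endpoint; with the apiserver down this fails with connection refused:
	out/minikube-linux-amd64 -p old-k8s-version-089147 ssh "curl -ksS https://localhost:8443/healthz"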
	
	
	==> dmesg <==
	[May10 19:27] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.000002] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.001401] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.001737] (rpcbind)[143]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.974355] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000007] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.102715] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.103174] kauditd_printk_skb: 74 callbacks suppressed
	[ +14.627732] kauditd_printk_skb: 46 callbacks suppressed
	[May10 19:33] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:35:38 up 8 min,  0 user,  load average: 0.08, 0.13, 0.09
	Linux old-k8s-version-089147 5.10.207 #1 SMP Fri May 9 03:49:24 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2024.11.2"
	
	
	==> kubelet <==
	May 10 19:35:36 old-k8s-version-089147 kubelet[6837]: goroutine 159 [chan receive]:
	May 10 19:35:36 old-k8s-version-089147 kubelet[6837]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001000c0, 0xc0009db680)
	May 10 19:35:36 old-k8s-version-089147 kubelet[6837]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	May 10 19:35:36 old-k8s-version-089147 kubelet[6837]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	May 10 19:35:36 old-k8s-version-089147 kubelet[6837]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	May 10 19:35:36 old-k8s-version-089147 kubelet[6837]: goroutine 160 [select]:
	May 10 19:35:36 old-k8s-version-089147 kubelet[6837]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000a45ef0, 0x4f0ac20, 0xc000499a90, 0x1, 0xc0001000c0)
	May 10 19:35:36 old-k8s-version-089147 kubelet[6837]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	May 10 19:35:36 old-k8s-version-089147 kubelet[6837]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000916c40, 0xc0001000c0)
	May 10 19:35:36 old-k8s-version-089147 kubelet[6837]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	May 10 19:35:36 old-k8s-version-089147 kubelet[6837]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	May 10 19:35:36 old-k8s-version-089147 kubelet[6837]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	May 10 19:35:36 old-k8s-version-089147 kubelet[6837]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000bb0720, 0xc000b3dd00)
	May 10 19:35:36 old-k8s-version-089147 kubelet[6837]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	May 10 19:35:36 old-k8s-version-089147 kubelet[6837]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	May 10 19:35:36 old-k8s-version-089147 kubelet[6837]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	May 10 19:35:36 old-k8s-version-089147 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	May 10 19:35:36 old-k8s-version-089147 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	May 10 19:35:37 old-k8s-version-089147 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	May 10 19:35:37 old-k8s-version-089147 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	May 10 19:35:37 old-k8s-version-089147 kubelet[6920]: I0510 19:35:37.376431    6920 server.go:416] Version: v1.20.0
	May 10 19:35:37 old-k8s-version-089147 kubelet[6920]: I0510 19:35:37.377002    6920 server.go:837] Client rotation is on, will bootstrap in background
	May 10 19:35:37 old-k8s-version-089147 kubelet[6920]: I0510 19:35:37.379245    6920 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	May 10 19:35:37 old-k8s-version-089147 kubelet[6920]: I0510 19:35:37.380318    6920 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	May 10 19:35:37 old-k8s-version-089147 kubelet[6920]: W0510 19:35:37.380423    6920 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-089147 -n old-k8s-version-089147
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-089147 -n old-k8s-version-089147: exit status 2 (257.277356ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-089147" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (510.27s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
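Each warning below corresponds to one failed pod list against the kubernetes-dashboard namespace. Roughly the equivalent manual check, a sketch that assumes minikube registered a kubectl context named after the profile, old-k8s-version-089147:

# List the dashboard pods the test is waiting for; while 192.168.50.225:8443 refuses
# connections, this fails exactly the way the warnings below do:
kubectl --context old-k8s-version-089147 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard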
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
[... previous WARNING repeated 22 more times; each poll of https://192.168.50.225:8443 was refused ...]
E0510 19:36:01.749935  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/enable-default-cni-380533/client.crt: no such file or directory" logger="UnhandledError"
[... the same WARNING repeated 9 more times ...]
E0510 19:36:10.805392  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/flannel-380533/client.crt: no such file or directory" logger="UnhandledError"
[... the same WARNING repeated 24 more times ...]
E0510 19:36:34.404653  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/bridge-380533/client.crt: no such file or directory" logger="UnhandledError"
[... the same WARNING repeated 75 more times ...]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
E0510 19:38:22.327707  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/no-preload-433152/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
[identical warning repeated 3 more times]
E0510 19:38:26.360178  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/auto-380533/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
[identical warning repeated 3 more times]
E0510 19:38:30.657263  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/default-k8s-diff-port-544623/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
[identical warning repeated 5 more times]
E0510 19:38:37.126617  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kindnet-380533/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
[identical warning repeated 11 more times]
E0510 19:38:48.489096  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
[identical warning repeated 31 more times]
E0510 19:39:20.898432  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
[identical warning repeated 4 more times]
E0510 19:39:25.378687  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/calico-380533/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
[identical warning repeated 11 more times]
E0510 19:39:37.810924  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
[identical warning repeated 11 more times]
E0510 19:39:49.427577  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/auto-380533/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
[identical warning repeated 9 more times]
E0510 19:40:00.191567  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kindnet-380533/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
[identical warning repeated 20 more times]
E0510 19:40:20.402811  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/custom-flannel-380533/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
E0510 19:40:48.442462  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/calico-380533/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
E0510 19:41:01.749719  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/enable-default-cni-380533/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
E0510 19:41:10.805714  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/flannel-380533/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
E0510 19:41:34.404558  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/bridge-380533/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
E0510 19:41:43.469989  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/custom-flannel-380533/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
E0510 19:42:24.817306  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/enable-default-cni-380533/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
E0510 19:42:33.872674  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/flannel-380533/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
E0510 19:42:57.470235  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/bridge-380533/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
E0510 19:43:22.328098  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/no-preload-433152/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
E0510 19:43:26.359589  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/auto-380533/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
E0510 19:43:30.657125  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/default-k8s-diff-port-544623/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
E0510 19:43:37.127623  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kindnet-380533/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
E0510 19:43:48.489761  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
E0510 19:44:25.378679  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/calico-380533/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
E0510 19:44:37.810386  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-089147 -n old-k8s-version-089147
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-089147 -n old-k8s-version-089147: exit status 2 (246.165246ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "old-k8s-version-089147" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-089147 -n old-k8s-version-089147
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-089147 -n old-k8s-version-089147: exit status 2 (233.540426ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-089147 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-089147 logs -n 25: (1.05439226s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p default-k8s-diff-port-544623       | default-k8s-diff-port-544623 | jenkins | v1.35.0 | 10 May 25 19:25 UTC | 10 May 25 19:25 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-544623 | jenkins | v1.35.0 | 10 May 25 19:25 UTC | 10 May 25 19:26 UTC |
	|         | default-k8s-diff-port-544623                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-089147        | old-k8s-version-089147       | jenkins | v1.35.0 | 10 May 25 19:25 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-483140            | embed-certs-483140           | jenkins | v1.35.0 | 10 May 25 19:25 UTC | 10 May 25 19:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-483140                                  | embed-certs-483140           | jenkins | v1.35.0 | 10 May 25 19:25 UTC | 10 May 25 19:27 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | no-preload-433152 image list                           | no-preload-433152            | jenkins | v1.35.0 | 10 May 25 19:26 UTC | 10 May 25 19:26 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-433152                                   | no-preload-433152            | jenkins | v1.35.0 | 10 May 25 19:26 UTC | 10 May 25 19:26 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-433152                                   | no-preload-433152            | jenkins | v1.35.0 | 10 May 25 19:26 UTC | 10 May 25 19:26 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-433152                                   | no-preload-433152            | jenkins | v1.35.0 | 10 May 25 19:26 UTC | 10 May 25 19:26 UTC |
	| delete  | -p no-preload-433152                                   | no-preload-433152            | jenkins | v1.35.0 | 10 May 25 19:26 UTC | 10 May 25 19:26 UTC |
	| image   | default-k8s-diff-port-544623                           | default-k8s-diff-port-544623 | jenkins | v1.35.0 | 10 May 25 19:26 UTC | 10 May 25 19:26 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-544623 | jenkins | v1.35.0 | 10 May 25 19:26 UTC | 10 May 25 19:26 UTC |
	|         | default-k8s-diff-port-544623                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-544623 | jenkins | v1.35.0 | 10 May 25 19:26 UTC | 10 May 25 19:26 UTC |
	|         | default-k8s-diff-port-544623                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-544623 | jenkins | v1.35.0 | 10 May 25 19:26 UTC | 10 May 25 19:26 UTC |
	|         | default-k8s-diff-port-544623                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-544623 | jenkins | v1.35.0 | 10 May 25 19:26 UTC | 10 May 25 19:26 UTC |
	|         | default-k8s-diff-port-544623                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-089147                              | old-k8s-version-089147       | jenkins | v1.35.0 | 10 May 25 19:27 UTC | 10 May 25 19:27 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-089147             | old-k8s-version-089147       | jenkins | v1.35.0 | 10 May 25 19:27 UTC | 10 May 25 19:27 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-089147                              | old-k8s-version-089147       | jenkins | v1.35.0 | 10 May 25 19:27 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-483140                 | embed-certs-483140           | jenkins | v1.35.0 | 10 May 25 19:27 UTC | 10 May 25 19:27 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-483140                                  | embed-certs-483140           | jenkins | v1.35.0 | 10 May 25 19:27 UTC | 10 May 25 19:28 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.0                           |                              |         |         |                     |                     |
	| image   | embed-certs-483140 image list                          | embed-certs-483140           | jenkins | v1.35.0 | 10 May 25 19:28 UTC | 10 May 25 19:28 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-483140                                  | embed-certs-483140           | jenkins | v1.35.0 | 10 May 25 19:28 UTC | 10 May 25 19:28 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-483140                                  | embed-certs-483140           | jenkins | v1.35.0 | 10 May 25 19:28 UTC | 10 May 25 19:28 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-483140                                  | embed-certs-483140           | jenkins | v1.35.0 | 10 May 25 19:28 UTC | 10 May 25 19:28 UTC |
	| delete  | -p embed-certs-483140                                  | embed-certs-483140           | jenkins | v1.35.0 | 10 May 25 19:28 UTC | 10 May 25 19:28 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/05/10 19:27:23
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0510 19:27:23.885144  459268 out.go:345] Setting OutFile to fd 1 ...
	I0510 19:27:23.885480  459268 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 19:27:23.885497  459268 out.go:358] Setting ErrFile to fd 2...
	I0510 19:27:23.885501  459268 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 19:27:23.885719  459268 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-388787/.minikube/bin
	I0510 19:27:23.886293  459268 out.go:352] Setting JSON to false
	I0510 19:27:23.887364  459268 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":32992,"bootTime":1746872252,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0510 19:27:23.887483  459268 start.go:140] virtualization: kvm guest
	I0510 19:27:23.889943  459268 out.go:177] * [embed-certs-483140] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0510 19:27:23.891957  459268 notify.go:220] Checking for updates...
	I0510 19:27:23.891994  459268 out.go:177]   - MINIKUBE_LOCATION=20720
	I0510 19:27:23.894190  459268 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0510 19:27:23.896124  459268 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20720-388787/kubeconfig
	I0510 19:27:23.897923  459268 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-388787/.minikube
	I0510 19:27:23.899523  459268 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0510 19:27:23.901199  459268 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0510 19:27:23.903392  459268 config.go:182] Loaded profile config "embed-certs-483140": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 19:27:23.904060  459268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:27:23.904180  459268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:27:23.920190  459268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45251
	I0510 19:27:23.920695  459268 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:27:23.921217  459268 main.go:141] libmachine: Using API Version  1
	I0510 19:27:23.921240  459268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:27:23.921569  459268 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:27:23.921756  459268 main.go:141] libmachine: (embed-certs-483140) Calling .DriverName
	I0510 19:27:23.922029  459268 driver.go:404] Setting default libvirt URI to qemu:///system
	I0510 19:27:23.922349  459268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:27:23.922417  459268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:27:23.938240  459268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41081
	I0510 19:27:23.938810  459268 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:27:23.939433  459268 main.go:141] libmachine: Using API Version  1
	I0510 19:27:23.939468  459268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:27:23.939903  459268 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:27:23.940145  459268 main.go:141] libmachine: (embed-certs-483140) Calling .DriverName
	I0510 19:27:23.978372  459268 out.go:177] * Using the kvm2 driver based on existing profile
	I0510 19:27:20.282773  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:20.283336  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | unable to find current IP address of domain old-k8s-version-089147 in network mk-old-k8s-version-089147
	I0510 19:27:20.283406  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | I0510 19:27:20.283343  459091 retry.go:31] will retry after 3.189593727s: waiting for domain to come up
	I0510 19:27:23.618741  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:23.619115  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | unable to find current IP address of domain old-k8s-version-089147 in network mk-old-k8s-version-089147
	I0510 19:27:23.619143  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | I0510 19:27:23.619075  459091 retry.go:31] will retry after 3.237680008s: waiting for domain to come up
	I0510 19:27:23.979818  459268 start.go:304] selected driver: kvm2
	I0510 19:27:23.979843  459268 start.go:908] validating driver "kvm2" against &{Name:embed-certs-483140 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:embed-certs-483140 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.231 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 19:27:23.979977  459268 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0510 19:27:23.980756  459268 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0510 19:27:23.980839  459268 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20720-388787/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0510 19:27:23.997236  459268 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0510 19:27:23.997883  459268 start_flags.go:975] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0510 19:27:23.997935  459268 cni.go:84] Creating CNI manager for ""
	I0510 19:27:23.998008  459268 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0510 19:27:23.998078  459268 start.go:347] cluster config:
	{Name:embed-certs-483140 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:embed-certs-483140 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.231 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 19:27:23.998238  459268 iso.go:125] acquiring lock: {Name:mk19640015999219180c6685480547adf0c02201 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0510 19:27:24.000161  459268 out.go:177] * Starting "embed-certs-483140" primary control-plane node in "embed-certs-483140" cluster
	I0510 19:27:24.001573  459268 preload.go:131] Checking if preload exists for k8s version v1.33.0 and runtime crio
	I0510 19:27:24.001646  459268 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-cri-o-overlay-amd64.tar.lz4
	I0510 19:27:24.001656  459268 cache.go:56] Caching tarball of preloaded images
	I0510 19:27:24.001770  459268 preload.go:172] Found /home/jenkins/minikube-integration/20720-388787/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0510 19:27:24.001787  459268 cache.go:59] Finished verifying existence of preloaded tar for v1.33.0 on crio
	I0510 19:27:24.001913  459268 profile.go:143] Saving config to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/embed-certs-483140/config.json ...
	I0510 19:27:24.002132  459268 start.go:360] acquireMachinesLock for embed-certs-483140: {Name:mk11499d7756d503a7a24339ad1a7f9ab9dc0fab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0510 19:27:28.400997  459268 start.go:364] duration metric: took 4.398817522s to acquireMachinesLock for "embed-certs-483140"
	I0510 19:27:28.401047  459268 start.go:96] Skipping create...Using existing machine configuration
	I0510 19:27:28.401054  459268 fix.go:54] fixHost starting: 
	I0510 19:27:28.401464  459268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:27:28.401519  459268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:27:28.419712  459268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44069
	I0510 19:27:28.420231  459268 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:27:28.420865  459268 main.go:141] libmachine: Using API Version  1
	I0510 19:27:28.420897  459268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:27:28.421274  459268 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:27:28.421549  459268 main.go:141] libmachine: (embed-certs-483140) Calling .DriverName
	I0510 19:27:28.421748  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetState
	I0510 19:27:28.423533  459268 fix.go:112] recreateIfNeeded on embed-certs-483140: state=Stopped err=<nil>
	I0510 19:27:28.423563  459268 main.go:141] libmachine: (embed-certs-483140) Calling .DriverName
	W0510 19:27:28.423744  459268 fix.go:138] unexpected machine state, will restart: <nil>
	I0510 19:27:28.425472  459268 out.go:177] * Restarting existing kvm2 VM for "embed-certs-483140" ...
	I0510 19:27:28.426613  459268 main.go:141] libmachine: (embed-certs-483140) Calling .Start
	I0510 19:27:28.426810  459268 main.go:141] libmachine: (embed-certs-483140) starting domain...
	I0510 19:27:28.426829  459268 main.go:141] libmachine: (embed-certs-483140) ensuring networks are active...
	I0510 19:27:28.427619  459268 main.go:141] libmachine: (embed-certs-483140) Ensuring network default is active
	I0510 19:27:28.428029  459268 main.go:141] libmachine: (embed-certs-483140) Ensuring network mk-embed-certs-483140 is active
	I0510 19:27:28.428436  459268 main.go:141] libmachine: (embed-certs-483140) getting domain XML...
	I0510 19:27:28.429330  459268 main.go:141] libmachine: (embed-certs-483140) creating domain...
	I0510 19:27:26.860579  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:26.861169  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has current primary IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:26.861235  459056 main.go:141] libmachine: (old-k8s-version-089147) found domain IP: 192.168.50.225
	I0510 19:27:26.861263  459056 main.go:141] libmachine: (old-k8s-version-089147) reserving static IP address...
	I0510 19:27:26.861678  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "old-k8s-version-089147", mac: "52:54:00:c5:c6:86", ip: "192.168.50.225"} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:26.861748  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | skip adding static IP to network mk-old-k8s-version-089147 - found existing host DHCP lease matching {name: "old-k8s-version-089147", mac: "52:54:00:c5:c6:86", ip: "192.168.50.225"}
	I0510 19:27:26.861769  459056 main.go:141] libmachine: (old-k8s-version-089147) reserved static IP address 192.168.50.225 for domain old-k8s-version-089147
	I0510 19:27:26.861785  459056 main.go:141] libmachine: (old-k8s-version-089147) waiting for SSH...
	I0510 19:27:26.861791  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | Getting to WaitForSSH function...
	I0510 19:27:26.863716  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:26.864074  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:26.864105  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:26.864224  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | Using SSH client type: external
	I0510 19:27:26.864249  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | Using SSH private key: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/old-k8s-version-089147/id_rsa (-rw-------)
	I0510 19:27:26.864275  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.225 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20720-388787/.minikube/machines/old-k8s-version-089147/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0510 19:27:26.864284  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | About to run SSH command:
	I0510 19:27:26.864292  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | exit 0
	I0510 19:27:26.992149  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | SSH cmd err, output: <nil>: 
	I0510 19:27:26.992596  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetConfigRaw
	I0510 19:27:26.993291  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetIP
	I0510 19:27:26.996245  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:26.996734  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:26.996760  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:26.996987  459056 profile.go:143] Saving config to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/config.json ...
	I0510 19:27:26.997231  459056 machine.go:93] provisionDockerMachine start ...
	I0510 19:27:26.997257  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .DriverName
	I0510 19:27:26.997484  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHHostname
	I0510 19:27:26.999968  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.000439  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:27.000476  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.000707  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHPort
	I0510 19:27:27.000924  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:27.001051  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:27.001195  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHUsername
	I0510 19:27:27.001309  459056 main.go:141] libmachine: Using SSH client type: native
	I0510 19:27:27.001588  459056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.50.225 22 <nil> <nil>}
	I0510 19:27:27.001603  459056 main.go:141] libmachine: About to run SSH command:
	hostname
	I0510 19:27:27.120348  459056 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0510 19:27:27.120385  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetMachineName
	I0510 19:27:27.120685  459056 buildroot.go:166] provisioning hostname "old-k8s-version-089147"
	I0510 19:27:27.120712  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetMachineName
	I0510 19:27:27.120937  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHHostname
	I0510 19:27:27.123906  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.124166  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:27.124192  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.124346  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHPort
	I0510 19:27:27.124515  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:27.124641  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:27.124770  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHUsername
	I0510 19:27:27.124903  459056 main.go:141] libmachine: Using SSH client type: native
	I0510 19:27:27.125130  459056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.50.225 22 <nil> <nil>}
	I0510 19:27:27.125146  459056 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-089147 && echo "old-k8s-version-089147" | sudo tee /etc/hostname
	I0510 19:27:27.254277  459056 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-089147
	
	I0510 19:27:27.254306  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHHostname
	I0510 19:27:27.257358  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.257763  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:27.257793  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.258010  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHPort
	I0510 19:27:27.258221  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:27.258392  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:27.258550  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHUsername
	I0510 19:27:27.258746  459056 main.go:141] libmachine: Using SSH client type: native
	I0510 19:27:27.258987  459056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.50.225 22 <nil> <nil>}
	I0510 19:27:27.259004  459056 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-089147' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-089147/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-089147' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0510 19:27:27.383141  459056 main.go:141] libmachine: SSH cmd err, output: <nil>: 
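
(Editor's note: the shell snippet run over SSH just above makes the machine's own hostname resolve locally by rewriting an existing 127.0.1.1 entry in /etc/hosts, or appending one if none exists, and it is written to be idempotent. Below is a minimal Go sketch of the same idea; it is illustrative only and is not minikube's actual implementation. The file path and hostname are taken from this log.)

    // Illustrative sketch: idempotently map a hostname to 127.0.1.1 in an
    // /etc/hosts-style file, mirroring the shell snippet in the log above.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func ensureHostname(path, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        lines := strings.Split(string(data), "\n")
        // First pass: if the hostname is already present on some line, do nothing
        // (this mirrors the initial `grep -xq '.*\s<name>'` guard).
        for _, l := range lines {
            f := strings.Fields(l)
            if len(f) >= 2 && f[len(f)-1] == name {
                return nil
            }
        }
        // Second pass: rewrite an existing 127.0.1.1 entry if there is one
        // (the `sed -i 's/^127.0.1.1\s.*/.../'` branch).
        for i, l := range lines {
            if strings.HasPrefix(l, "127.0.1.1") {
                lines[i] = "127.0.1.1 " + name
                return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
            }
        }
        // Otherwise append a new entry (the `tee -a` branch).
        lines = append(lines, "127.0.1.1 "+name)
        return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
    }

    func main() {
        if err := ensureHostname("/etc/hosts", "old-k8s-version-089147"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
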
	I0510 19:27:27.383177  459056 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20720-388787/.minikube CaCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20720-388787/.minikube}
	I0510 19:27:27.383245  459056 buildroot.go:174] setting up certificates
	I0510 19:27:27.383268  459056 provision.go:84] configureAuth start
	I0510 19:27:27.383282  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetMachineName
	I0510 19:27:27.383632  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetIP
	I0510 19:27:27.386412  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.386733  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:27.386760  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.386920  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHHostname
	I0510 19:27:27.388990  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.389308  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:27.389346  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.389489  459056 provision.go:143] copyHostCerts
	I0510 19:27:27.389586  459056 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem, removing ...
	I0510 19:27:27.389611  459056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem
	I0510 19:27:27.389674  459056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem (1675 bytes)
	I0510 19:27:27.389763  459056 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem, removing ...
	I0510 19:27:27.389771  459056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem
	I0510 19:27:27.389797  459056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem (1078 bytes)
	I0510 19:27:27.389845  459056 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem, removing ...
	I0510 19:27:27.389852  459056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem
	I0510 19:27:27.389873  459056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem (1123 bytes)
	I0510 19:27:27.389917  459056 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-089147 san=[127.0.0.1 192.168.50.225 localhost minikube old-k8s-version-089147]
	I0510 19:27:27.706220  459056 provision.go:177] copyRemoteCerts
	I0510 19:27:27.706291  459056 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0510 19:27:27.706321  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHHostname
	I0510 19:27:27.709279  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.709662  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:27.709704  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.709901  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHPort
	I0510 19:27:27.710147  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:27.710312  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHUsername
	I0510 19:27:27.710453  459056 sshutil.go:53] new ssh client: &{IP:192.168.50.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/old-k8s-version-089147/id_rsa Username:docker}
	I0510 19:27:27.796192  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0510 19:27:27.826223  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0510 19:27:27.856165  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0510 19:27:27.885803  459056 provision.go:87] duration metric: took 502.517549ms to configureAuth
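
(Editor's note: the configureAuth phase above generates a server certificate whose subject alternative names cover both the VM's IP addresses and its host names, signed by the minikube CA. The following is a minimal Go sketch of how such a SAN certificate can be produced with the standard crypto/x509 package; it is illustrative only and does not reproduce minikube's provisioning code. The SANs, organization, and expiration are the values reported in this log; the throwaway CA exists only to keep the sketch self-contained.)

    // Illustrative sketch: issue a CA-signed server certificate carrying the
    // SANs from the provision log (IPs 127.0.0.1 and 192.168.50.225; DNS names
    // localhost, minikube, old-k8s-version-089147).
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func must(err error) {
        if err != nil {
            panic(err)
        }
    }

    func main() {
        // A throwaway CA keeps the sketch self-contained; minikube instead loads
        // its existing ca.pem / ca-key.pem from the .minikube directory.
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        must(err)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        must(err)
        caCert, err := x509.ParseCertificate(caDER)
        must(err)

        srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
        must(err)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-089147"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.225")},
            DNSNames:     []string{"localhost", "minikube", "old-k8s-version-089147"},
        }
        srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        must(err)
        must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
    }
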
	I0510 19:27:27.885844  459056 buildroot.go:189] setting minikube options for container-runtime
	I0510 19:27:27.886049  459056 config.go:182] Loaded profile config "old-k8s-version-089147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0510 19:27:27.886126  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHHostname
	I0510 19:27:27.888892  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.889274  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:27.889304  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.889432  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHPort
	I0510 19:27:27.889662  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:27.889842  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:27.890001  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHUsername
	I0510 19:27:27.890137  459056 main.go:141] libmachine: Using SSH client type: native
	I0510 19:27:27.890398  459056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.50.225 22 <nil> <nil>}
	I0510 19:27:27.890414  459056 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0510 19:27:28.145754  459056 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0510 19:27:28.145780  459056 machine.go:96] duration metric: took 1.148533327s to provisionDockerMachine
	I0510 19:27:28.145793  459056 start.go:293] postStartSetup for "old-k8s-version-089147" (driver="kvm2")
	I0510 19:27:28.145805  459056 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0510 19:27:28.145843  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .DriverName
	I0510 19:27:28.146213  459056 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0510 19:27:28.146241  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHHostname
	I0510 19:27:28.148935  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:28.149310  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:28.149338  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:28.149442  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHPort
	I0510 19:27:28.149630  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:28.149794  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHUsername
	I0510 19:27:28.149969  459056 sshutil.go:53] new ssh client: &{IP:192.168.50.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/old-k8s-version-089147/id_rsa Username:docker}
	I0510 19:27:28.237429  459056 ssh_runner.go:195] Run: cat /etc/os-release
	I0510 19:27:28.242504  459056 info.go:137] Remote host: Buildroot 2024.11.2
	I0510 19:27:28.242535  459056 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-388787/.minikube/addons for local assets ...
	I0510 19:27:28.242600  459056 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-388787/.minikube/files for local assets ...
	I0510 19:27:28.242694  459056 filesync.go:149] local asset: /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem -> 3959802.pem in /etc/ssl/certs
	I0510 19:27:28.242795  459056 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0510 19:27:28.255581  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem --> /etc/ssl/certs/3959802.pem (1708 bytes)
	I0510 19:27:28.285383  459056 start.go:296] duration metric: took 139.572888ms for postStartSetup
	I0510 19:27:28.285430  459056 fix.go:56] duration metric: took 19.171545731s for fixHost
	I0510 19:27:28.285452  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHHostname
	I0510 19:27:28.288861  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:28.289256  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:28.289288  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:28.289472  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHPort
	I0510 19:27:28.289747  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:28.289968  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:28.290122  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHUsername
	I0510 19:27:28.290275  459056 main.go:141] libmachine: Using SSH client type: native
	I0510 19:27:28.290504  459056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.50.225 22 <nil> <nil>}
	I0510 19:27:28.290514  459056 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0510 19:27:28.400790  459056 main.go:141] libmachine: SSH cmd err, output: <nil>: 1746905248.354737003
	
	I0510 19:27:28.400820  459056 fix.go:216] guest clock: 1746905248.354737003
	I0510 19:27:28.400830  459056 fix.go:229] Guest: 2025-05-10 19:27:28.354737003 +0000 UTC Remote: 2025-05-10 19:27:28.285433906 +0000 UTC m=+19.332417949 (delta=69.303097ms)
	I0510 19:27:28.400874  459056 fix.go:200] guest clock delta is within tolerance: 69.303097ms
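
(Editor's note: the fix.go lines above read the guest clock with `date +%s.%N` over SSH, compare it to the host clock, and accept a delta of about 69 ms. The short Go sketch below reproduces that comparison with the exact values from this log; it is illustrative only, and the one-second tolerance is an assumption made for the example, not a value taken from minikube.)

    // Illustrative sketch: compute the guest/host clock delta the way the
    // fix.go lines above report it, from the guest's `date +%s.%N` output
    // and a host-side reference time.
    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseEpoch converts a string such as "1746905248.354737003" into a time.Time.
    func parseEpoch(s string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            frac := parts[1]
            if len(frac) < 9 {
                frac += strings.Repeat("0", 9-len(frac)) // right-pad to nanoseconds
            } else {
                frac = frac[:9]
            }
            if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseEpoch("1746905248.354737003") // guest output of `date +%s.%N`
        if err != nil {
            panic(err)
        }
        host := time.Date(2025, 5, 10, 19, 27, 28, 285433906, time.UTC) // host-side reference from the log
        delta := guest.Sub(host)
        // The tolerance below is an assumption for this sketch, not minikube's value.
        fmt.Printf("delta=%v within tolerance: %v\n", delta, delta.Abs() < time.Second)
    }
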
	I0510 19:27:28.400901  459056 start.go:83] releasing machines lock for "old-k8s-version-089147", held for 19.287012994s
	I0510 19:27:28.400943  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .DriverName
	I0510 19:27:28.401246  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetIP
	I0510 19:27:28.404469  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:28.404985  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:28.405012  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:28.405227  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .DriverName
	I0510 19:27:28.405870  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .DriverName
	I0510 19:27:28.406067  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .DriverName
	I0510 19:27:28.406182  459056 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0510 19:27:28.406225  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHHostname
	I0510 19:27:28.406371  459056 ssh_runner.go:195] Run: cat /version.json
	I0510 19:27:28.406414  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHHostname
	I0510 19:27:28.409133  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:28.409451  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:28.409485  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:28.409508  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:28.409700  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHPort
	I0510 19:27:28.409895  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:28.409939  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:28.409971  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:28.410074  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHUsername
	I0510 19:27:28.410144  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHPort
	I0510 19:27:28.410238  459056 sshutil.go:53] new ssh client: &{IP:192.168.50.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/old-k8s-version-089147/id_rsa Username:docker}
	I0510 19:27:28.410313  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:28.410431  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHUsername
	I0510 19:27:28.410556  459056 sshutil.go:53] new ssh client: &{IP:192.168.50.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/old-k8s-version-089147/id_rsa Username:docker}
	I0510 19:27:28.522881  459056 ssh_runner.go:195] Run: systemctl --version
	I0510 19:27:28.529679  459056 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0510 19:27:28.679208  459056 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0510 19:27:28.686449  459056 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0510 19:27:28.686542  459056 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0510 19:27:28.706391  459056 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0510 19:27:28.706422  459056 start.go:495] detecting cgroup driver to use...
	I0510 19:27:28.706502  459056 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0510 19:27:28.725500  459056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0510 19:27:28.743141  459056 docker.go:225] disabling cri-docker service (if available) ...
	I0510 19:27:28.743218  459056 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0510 19:27:28.763489  459056 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0510 19:27:28.782362  459056 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0510 19:27:28.930849  459056 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0510 19:27:29.145684  459056 docker.go:241] disabling docker service ...
	I0510 19:27:29.145777  459056 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0510 19:27:29.162572  459056 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0510 19:27:29.177892  459056 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0510 19:27:29.337238  459056 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0510 19:27:29.498230  459056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0510 19:27:29.515221  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0510 19:27:29.539326  459056 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0510 19:27:29.539400  459056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:27:29.551931  459056 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0510 19:27:29.552027  459056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:27:29.563727  459056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:27:29.576495  459056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:27:29.589274  459056 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0510 19:27:29.602567  459056 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0510 19:27:29.613569  459056 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0510 19:27:29.613666  459056 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0510 19:27:29.631475  459056 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0510 19:27:29.646992  459056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 19:27:29.783415  459056 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0510 19:27:29.908799  459056 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0510 19:27:29.908871  459056 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0510 19:27:29.916611  459056 start.go:563] Will wait 60s for crictl version
	I0510 19:27:29.916678  459056 ssh_runner.go:195] Run: which crictl
	I0510 19:27:29.922342  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0510 19:27:29.970957  459056 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0510 19:27:29.971075  459056 ssh_runner.go:195] Run: crio --version
	I0510 19:27:30.013260  459056 ssh_runner.go:195] Run: crio --version
	I0510 19:27:30.045551  459056 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
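
(Editor's note: after cri-o is reconfigured and restarted above, the log waits up to 60 s for /var/run/crio/crio.sock to appear and for crictl to answer before moving on to Kubernetes setup. The following is a minimal Go sketch of that wait-for-socket pattern; the poll interval and error wording are choices made for the example, not minikube's.)

    // Illustrative sketch: poll for the CRI-O socket until it exists or a
    // deadline expires, as in the "Will wait 60s for socket path" step above.
    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls path until it exists or the timeout elapses.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
            }
            time.Sleep(500 * time.Millisecond) // poll interval chosen for the sketch
        }
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("crio socket is ready")
    }
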
	I0510 19:27:29.772968  459268 main.go:141] libmachine: (embed-certs-483140) waiting for IP...
	I0510 19:27:29.773852  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:29.774282  459268 main.go:141] libmachine: (embed-certs-483140) DBG | unable to find current IP address of domain embed-certs-483140 in network mk-embed-certs-483140
	I0510 19:27:29.774439  459268 main.go:141] libmachine: (embed-certs-483140) DBG | I0510 19:27:29.774308  459321 retry.go:31] will retry after 290.306519ms: waiting for domain to come up
	I0510 19:27:30.066100  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:30.066611  459268 main.go:141] libmachine: (embed-certs-483140) DBG | unable to find current IP address of domain embed-certs-483140 in network mk-embed-certs-483140
	I0510 19:27:30.066646  459268 main.go:141] libmachine: (embed-certs-483140) DBG | I0510 19:27:30.066565  459321 retry.go:31] will retry after 275.607152ms: waiting for domain to come up
	I0510 19:27:30.344347  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:30.345208  459268 main.go:141] libmachine: (embed-certs-483140) DBG | unable to find current IP address of domain embed-certs-483140 in network mk-embed-certs-483140
	I0510 19:27:30.345242  459268 main.go:141] libmachine: (embed-certs-483140) DBG | I0510 19:27:30.345116  459321 retry.go:31] will retry after 431.583413ms: waiting for domain to come up
	I0510 19:27:30.779076  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:30.779843  459268 main.go:141] libmachine: (embed-certs-483140) DBG | unable to find current IP address of domain embed-certs-483140 in network mk-embed-certs-483140
	I0510 19:27:30.779882  459268 main.go:141] libmachine: (embed-certs-483140) DBG | I0510 19:27:30.779780  459321 retry.go:31] will retry after 472.118095ms: waiting for domain to come up
	I0510 19:27:31.253280  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:31.253935  459268 main.go:141] libmachine: (embed-certs-483140) DBG | unable to find current IP address of domain embed-certs-483140 in network mk-embed-certs-483140
	I0510 19:27:31.253963  459268 main.go:141] libmachine: (embed-certs-483140) DBG | I0510 19:27:31.253906  459321 retry.go:31] will retry after 565.053718ms: waiting for domain to come up
	I0510 19:27:31.820497  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:31.821065  459268 main.go:141] libmachine: (embed-certs-483140) DBG | unable to find current IP address of domain embed-certs-483140 in network mk-embed-certs-483140
	I0510 19:27:31.821097  459268 main.go:141] libmachine: (embed-certs-483140) DBG | I0510 19:27:31.821039  459321 retry.go:31] will retry after 714.111732ms: waiting for domain to come up
	I0510 19:27:32.536460  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:32.537050  459268 main.go:141] libmachine: (embed-certs-483140) DBG | unable to find current IP address of domain embed-certs-483140 in network mk-embed-certs-483140
	I0510 19:27:32.537080  459268 main.go:141] libmachine: (embed-certs-483140) DBG | I0510 19:27:32.537000  459321 retry.go:31] will retry after 1.161843323s: waiting for domain to come up
	I0510 19:27:33.701019  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:33.701583  459268 main.go:141] libmachine: (embed-certs-483140) DBG | unable to find current IP address of domain embed-certs-483140 in network mk-embed-certs-483140
	I0510 19:27:33.701613  459268 main.go:141] libmachine: (embed-certs-483140) DBG | I0510 19:27:33.701550  459321 retry.go:31] will retry after 996.121621ms: waiting for domain to come up
	I0510 19:27:30.046696  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetIP
	I0510 19:27:30.049916  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:30.050298  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:30.050343  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:30.050593  459056 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0510 19:27:30.055795  459056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0510 19:27:30.072862  459056 kubeadm.go:875] updating cluster {Name:old-k8s-version-089147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-089147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.225 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0510 19:27:30.073023  459056 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0510 19:27:30.073092  459056 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 19:27:30.136655  459056 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0510 19:27:30.136733  459056 ssh_runner.go:195] Run: which lz4
	I0510 19:27:30.141756  459056 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0510 19:27:30.146784  459056 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0510 19:27:30.146832  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0510 19:27:32.084982  459056 crio.go:462] duration metric: took 1.943253158s to copy over tarball
	I0510 19:27:32.085084  459056 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0510 19:27:34.700012  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:34.700655  459268 main.go:141] libmachine: (embed-certs-483140) DBG | unable to find current IP address of domain embed-certs-483140 in network mk-embed-certs-483140
	I0510 19:27:34.700709  459268 main.go:141] libmachine: (embed-certs-483140) DBG | I0510 19:27:34.700617  459321 retry.go:31] will retry after 1.33170267s: waiting for domain to come up
	I0510 19:27:36.033761  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:36.034412  459268 main.go:141] libmachine: (embed-certs-483140) DBG | unable to find current IP address of domain embed-certs-483140 in network mk-embed-certs-483140
	I0510 19:27:36.034447  459268 main.go:141] libmachine: (embed-certs-483140) DBG | I0510 19:27:36.034366  459321 retry.go:31] will retry after 2.129430607s: waiting for domain to come up
	I0510 19:27:38.166385  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:38.167048  459268 main.go:141] libmachine: (embed-certs-483140) DBG | unable to find current IP address of domain embed-certs-483140 in network mk-embed-certs-483140
	I0510 19:27:38.167074  459268 main.go:141] libmachine: (embed-certs-483140) DBG | I0510 19:27:38.167010  459321 retry.go:31] will retry after 1.898585133s: waiting for domain to come up
	I0510 19:27:34.680248  459056 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.595132142s)
	I0510 19:27:34.680275  459056 crio.go:469] duration metric: took 2.595258666s to extract the tarball
	I0510 19:27:34.680284  459056 ssh_runner.go:146] rm: /preloaded.tar.lz4
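The lines above show the preload flow: stat /preloaded.tar.lz4, scp the cached tarball over when it is missing, extract it into /var while preserving security.capability xattrs, then delete it. A minimal Go sketch of the extract-and-clean-up step (assumptions: run as root on the guest, same paths and tar flags as in the log):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	tarball := "/preloaded.tar.lz4"
	// Same flags as in the log: keep security.capability xattrs, decompress with lz4.
	extract := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	extract.Stdout, extract.Stderr = os.Stdout, os.Stderr
	if err := extract.Run(); err != nil {
		log.Fatalf("extract %s: %v", tarball, err)
	}
	// The log removes the tarball afterwards to reclaim disk space.
	if err := os.Remove(tarball); err != nil {
		log.Printf("rm %s: %v", tarball, err)
	}
}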
	I0510 19:27:34.725856  459056 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 19:27:34.769530  459056 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0510 19:27:34.769567  459056 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0510 19:27:34.769639  459056 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0510 19:27:34.769682  459056 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0510 19:27:34.769696  459056 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0510 19:27:34.769712  459056 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0510 19:27:34.769686  459056 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0510 19:27:34.769766  459056 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0510 19:27:34.769779  459056 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0510 19:27:34.769798  459056 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0510 19:27:34.771393  459056 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0510 19:27:34.771413  459056 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0510 19:27:34.771433  459056 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0510 19:27:34.771391  459056 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0510 19:27:34.771454  459056 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0510 19:27:34.771457  459056 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0510 19:27:34.771488  459056 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0510 19:27:34.771522  459056 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0510 19:27:34.903898  459056 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0510 19:27:34.909532  459056 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0510 19:27:34.909958  459056 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0510 19:27:34.920714  459056 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0510 19:27:34.927038  459056 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0510 19:27:34.932543  459056 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0510 19:27:34.939391  459056 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0510 19:27:35.035164  459056 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0510 19:27:35.035225  459056 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0510 19:27:35.035308  459056 ssh_runner.go:195] Run: which crictl
	I0510 19:27:35.046705  459056 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0510 19:27:35.046773  459056 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0510 19:27:35.046831  459056 ssh_runner.go:195] Run: which crictl
	I0510 19:27:35.102600  459056 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0510 19:27:35.102657  459056 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0510 19:27:35.102728  459056 ssh_runner.go:195] Run: which crictl
	I0510 19:27:35.114127  459056 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0510 19:27:35.114197  459056 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0510 19:27:35.114220  459056 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0510 19:27:35.114255  459056 ssh_runner.go:195] Run: which crictl
	I0510 19:27:35.114262  459056 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0510 19:27:35.114305  459056 ssh_runner.go:195] Run: which crictl
	I0510 19:27:35.114526  459056 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0510 19:27:35.114562  459056 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0510 19:27:35.114596  459056 ssh_runner.go:195] Run: which crictl
	I0510 19:27:35.135454  459056 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0510 19:27:35.135500  459056 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0510 19:27:35.135549  459056 ssh_runner.go:195] Run: which crictl
	I0510 19:27:35.135570  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0510 19:27:35.135627  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0510 19:27:35.135673  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0510 19:27:35.135728  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0510 19:27:35.135753  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0510 19:27:35.135782  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0510 19:27:35.246929  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0510 19:27:35.246999  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0510 19:27:35.304129  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0510 19:27:35.304183  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0510 19:27:35.304193  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0510 19:27:35.304231  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0510 19:27:35.304278  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0510 19:27:35.381894  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0510 19:27:35.381939  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0510 19:27:35.482712  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0510 19:27:35.482788  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0510 19:27:35.482823  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0510 19:27:35.482858  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0510 19:27:35.482947  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0510 19:27:35.526146  459056 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0510 19:27:35.557215  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0510 19:27:35.649079  459056 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0510 19:27:35.649160  459056 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0510 19:27:35.649222  459056 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0510 19:27:35.649256  459056 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0510 19:27:35.649351  459056 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0510 19:27:35.667931  459056 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0510 19:27:35.671336  459056 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0510 19:27:35.818843  459056 cache_images.go:92] duration metric: took 1.049254698s to LoadCachedImages
	W0510 19:27:35.818925  459056 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
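The image-cache pass above follows a check/remove/load pattern: podman image inspect decides whether each required image is already present at the expected hash, crictl rmi clears stale tags, and missing images are loaded from the local cache directory. A minimal Go sketch of just the presence check (imagePresent is a hypothetical helper, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func imagePresent(ref string) (bool, string) {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", ref).Output()
	if err != nil {
		return false, "" // non-zero exit: image not in the container runtime
	}
	return true, strings.TrimSpace(string(out))
}

func main() {
	for _, ref := range []string{
		"registry.k8s.io/kube-apiserver:v1.20.0",
		"registry.k8s.io/pause:3.2",
	} {
		if ok, id := imagePresent(ref); ok {
			fmt.Printf("%s present at %s\n", ref, id)
		} else {
			fmt.Printf("%s needs transfer\n", ref)
		}
	}
}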
	I0510 19:27:35.818936  459056 kubeadm.go:926] updating node { 192.168.50.225 8443 v1.20.0 crio true true} ...
	I0510 19:27:35.819071  459056 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-089147 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.225
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-089147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0510 19:27:35.819178  459056 ssh_runner.go:195] Run: crio config
	I0510 19:27:35.871053  459056 cni.go:84] Creating CNI manager for ""
	I0510 19:27:35.871078  459056 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0510 19:27:35.871088  459056 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0510 19:27:35.871108  459056 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.225 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-089147 NodeName:old-k8s-version-089147 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.225"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.225 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0510 19:27:35.871325  459056 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.225
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-089147"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.225
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.225"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0510 19:27:35.871410  459056 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0510 19:27:35.884778  459056 binaries.go:44] Found k8s binaries, skipping transfer
	I0510 19:27:35.884850  459056 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0510 19:27:35.897755  459056 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0510 19:27:35.920392  459056 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0510 19:27:35.944066  459056 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0510 19:27:35.969513  459056 ssh_runner.go:195] Run: grep 192.168.50.225	control-plane.minikube.internal$ /etc/hosts
	I0510 19:27:35.973968  459056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.225	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0510 19:27:35.989113  459056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 19:27:36.126144  459056 ssh_runner.go:195] Run: sudo systemctl start kubelet
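The steps above write the kubelet systemd drop-in and unit file, stage kubeadm.yaml.new, then daemon-reload and start kubelet. A minimal Go sketch of that drop-in/restart step (the unit content here is abbreviated; the paths match the log, everything else is illustrative and requires root):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	dir := "/etc/systemd/system/kubelet.service.d"
	dropIn := dir + "/10-kubeadm.conf"
	// Abbreviated unit content; the real file carries the full ExecStart from the log.
	unit := "[Unit]\nWants=crio.service\n\n[Service]\nExecStart=\nExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf\n"
	if err := os.MkdirAll(dir, 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile(dropIn, []byte(unit), 0o644); err != nil {
		log.Fatal(err)
	}
	for _, args := range [][]string{{"daemon-reload"}, {"start", "kubelet"}} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			log.Fatalf("systemctl %v: %v: %s", args, err, out)
		}
	}
}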
	I0510 19:27:36.161368  459056 certs.go:68] Setting up /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147 for IP: 192.168.50.225
	I0510 19:27:36.161393  459056 certs.go:194] generating shared ca certs ...
	I0510 19:27:36.161414  459056 certs.go:226] acquiring lock for ca certs: {Name:mk8db74782205da4ac57ef815dd495cda255251a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 19:27:36.161602  459056 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20720-388787/.minikube/ca.key
	I0510 19:27:36.161660  459056 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.key
	I0510 19:27:36.161675  459056 certs.go:256] generating profile certs ...
	I0510 19:27:36.161815  459056 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/client.key
	I0510 19:27:36.161897  459056 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/apiserver.key.3362ca92
	I0510 19:27:36.161951  459056 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/proxy-client.key
	I0510 19:27:36.162093  459056 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/395980.pem (1338 bytes)
	W0510 19:27:36.162134  459056 certs.go:480] ignoring /home/jenkins/minikube-integration/20720-388787/.minikube/certs/395980_empty.pem, impossibly tiny 0 bytes
	I0510 19:27:36.162148  459056 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem (1679 bytes)
	I0510 19:27:36.162186  459056 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem (1078 bytes)
	I0510 19:27:36.162219  459056 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem (1123 bytes)
	I0510 19:27:36.162251  459056 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem (1675 bytes)
	I0510 19:27:36.162305  459056 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem (1708 bytes)
	I0510 19:27:36.163029  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0510 19:27:36.207434  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0510 19:27:36.254337  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0510 19:27:36.302029  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0510 19:27:36.340123  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0510 19:27:36.372457  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0510 19:27:36.417695  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0510 19:27:36.454687  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0510 19:27:36.491453  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0510 19:27:36.527708  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/certs/395980.pem --> /usr/share/ca-certificates/395980.pem (1338 bytes)
	I0510 19:27:36.566188  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem --> /usr/share/ca-certificates/3959802.pem (1708 bytes)
	I0510 19:27:36.605695  459056 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0510 19:27:36.633416  459056 ssh_runner.go:195] Run: openssl version
	I0510 19:27:36.640812  459056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0510 19:27:36.655287  459056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0510 19:27:36.660996  459056 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 10 17:52 /usr/share/ca-certificates/minikubeCA.pem
	I0510 19:27:36.661078  459056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0510 19:27:36.671509  459056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0510 19:27:36.685341  459056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/395980.pem && ln -fs /usr/share/ca-certificates/395980.pem /etc/ssl/certs/395980.pem"
	I0510 19:27:36.701195  459056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/395980.pem
	I0510 19:27:36.707338  459056 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 10 18:00 /usr/share/ca-certificates/395980.pem
	I0510 19:27:36.707426  459056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/395980.pem
	I0510 19:27:36.715832  459056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/395980.pem /etc/ssl/certs/51391683.0"
	I0510 19:27:36.730499  459056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3959802.pem && ln -fs /usr/share/ca-certificates/3959802.pem /etc/ssl/certs/3959802.pem"
	I0510 19:27:36.745937  459056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3959802.pem
	I0510 19:27:36.753124  459056 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 10 18:00 /usr/share/ca-certificates/3959802.pem
	I0510 19:27:36.753219  459056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3959802.pem
	I0510 19:27:36.763162  459056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3959802.pem /etc/ssl/certs/3ec20f2e.0"
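The three certificate blocks above wire CA files into the system trust store: hash each PEM with openssl x509 -hash and symlink it as /etc/ssl/certs/<hash>.0 so OpenSSL's hashed-directory lookup can find it. A minimal Go sketch of computing the hash and the link target (illustrative; the real flow runs these over SSH with sudo):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHash returns the OpenSSL subject hash used to name trust-store links.
func subjectHash(pem string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	h, err := subjectHash(pem)
	if err != nil {
		fmt.Println("hash failed:", err)
		return
	}
	// Same shape as the log: ln -fs <pem> /etc/ssl/certs/<hash>.0 (run with sudo there).
	fmt.Printf("would run: sudo ln -fs %s /etc/ssl/certs/%s.0\n", pem, h)
}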
	I0510 19:27:36.777980  459056 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0510 19:27:36.784377  459056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0510 19:27:36.792871  459056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0510 19:27:36.801028  459056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0510 19:27:36.809570  459056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0510 19:27:36.820430  459056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0510 19:27:36.830234  459056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
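The -checkend 86400 runs above are validity probes: openssl exits non-zero when the certificate will expire within the next 86400 seconds, which is what would trigger regeneration. A minimal Go sketch of the same probe (expiresWithinADay is a hypothetical helper):

package main

import (
	"fmt"
	"os/exec"
)

func expiresWithinADay(certPath string) bool {
	// openssl x509 -checkend N exits non-zero if the cert expires within N seconds.
	err := exec.Command("openssl", "x509", "-noout",
		"-in", certPath, "-checkend", "86400").Run()
	return err != nil
}

func main() {
	for _, c := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		fmt.Printf("%s expiring within 24h: %v\n", c, expiresWithinADay(c))
	}
}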
	I0510 19:27:36.838492  459056 kubeadm.go:392] StartCluster: {Name:old-k8s-version-089147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-089147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.225 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 19:27:36.838628  459056 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0510 19:27:36.838710  459056 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0510 19:27:36.883637  459056 cri.go:89] found id: ""
	I0510 19:27:36.883721  459056 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0510 19:27:36.898381  459056 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0510 19:27:36.898418  459056 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0510 19:27:36.898479  459056 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0510 19:27:36.911968  459056 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0510 19:27:36.912423  459056 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-089147" does not appear in /home/jenkins/minikube-integration/20720-388787/kubeconfig
	I0510 19:27:36.912622  459056 kubeconfig.go:62] /home/jenkins/minikube-integration/20720-388787/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-089147" cluster setting kubeconfig missing "old-k8s-version-089147" context setting]
	I0510 19:27:36.912933  459056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-388787/kubeconfig: {Name:mk5ad7285fe4c17b2779ea6d5a539f101fe94797 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 19:27:36.978461  459056 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0510 19:27:36.992010  459056 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.50.225
	I0510 19:27:36.992058  459056 kubeadm.go:1152] stopping kube-system containers ...
	I0510 19:27:36.992090  459056 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0510 19:27:36.992157  459056 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0510 19:27:37.036332  459056 cri.go:89] found id: ""
	I0510 19:27:37.036417  459056 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0510 19:27:37.061304  459056 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0510 19:27:37.077360  459056 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0510 19:27:37.077388  459056 kubeadm.go:157] found existing configuration files:
	
	I0510 19:27:37.077447  459056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0510 19:27:37.091136  459056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0510 19:27:37.091207  459056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0510 19:27:37.108190  459056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0510 19:27:37.122863  459056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0510 19:27:37.122925  459056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0510 19:27:37.135581  459056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0510 19:27:37.151096  459056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0510 19:27:37.151176  459056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0510 19:27:37.163976  459056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0510 19:27:37.176297  459056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0510 19:27:37.176382  459056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0510 19:27:37.189484  459056 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0510 19:27:37.202907  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0510 19:27:37.370636  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0510 19:27:38.101468  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0510 19:27:38.357025  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0510 19:27:38.472109  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
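The restart path above replays individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the staged config instead of running a full kubeadm init. A minimal Go sketch of driving that phase sequence (illustrative only; the real runs go through ssh_runner with the pinned v1.20.0 binaries):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, p...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		fmt.Println("running:", cmd.String())
		if err := cmd.Run(); err != nil {
			fmt.Println("phase failed:", err)
			return
		}
	}
}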
	I0510 19:27:38.566036  459056 api_server.go:52] waiting for apiserver process to appear ...
	I0510 19:27:38.566163  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:40.067566  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:40.068079  459268 main.go:141] libmachine: (embed-certs-483140) DBG | unable to find current IP address of domain embed-certs-483140 in network mk-embed-certs-483140
	I0510 19:27:40.068151  459268 main.go:141] libmachine: (embed-certs-483140) DBG | I0510 19:27:40.068067  459321 retry.go:31] will retry after 3.236923309s: waiting for domain to come up
	I0510 19:27:43.308549  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:43.309080  459268 main.go:141] libmachine: (embed-certs-483140) DBG | unable to find current IP address of domain embed-certs-483140 in network mk-embed-certs-483140
	I0510 19:27:43.309112  459268 main.go:141] libmachine: (embed-certs-483140) DBG | I0510 19:27:43.309038  459321 retry.go:31] will retry after 2.981327362s: waiting for domain to come up
	I0510 19:27:39.066944  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:39.566854  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:40.067066  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:40.567198  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:41.066452  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:41.566381  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:42.066951  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:42.567170  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:43.067308  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:43.566541  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:46.293587  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:46.294125  459268 main.go:141] libmachine: (embed-certs-483140) DBG | unable to find current IP address of domain embed-certs-483140 in network mk-embed-certs-483140
	I0510 19:27:46.294169  459268 main.go:141] libmachine: (embed-certs-483140) DBG | I0510 19:27:46.294106  459321 retry.go:31] will retry after 3.49595936s: waiting for domain to come up
	I0510 19:27:44.067005  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:44.566869  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:45.066432  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:45.567107  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:46.066205  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:46.566600  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:47.066806  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:47.567316  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:48.067123  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:48.566636  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
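The repeated pgrep lines above are a poll loop: roughly every 500ms minikube checks whether a kube-apiserver process has appeared yet. A minimal Go sketch of such a wait loop (the 90s timeout is an assumption for the example, not taken from the log):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServer(timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching kube-apiserver process exists.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return true
		}
		time.Sleep(500 * time.Millisecond)
	}
	return false
}

func main() {
	fmt.Println("apiserver up:", waitForAPIServer(90*time.Second))
}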
	I0510 19:27:49.792274  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:49.792796  459268 main.go:141] libmachine: (embed-certs-483140) found domain IP: 192.168.72.231
	I0510 19:27:49.792820  459268 main.go:141] libmachine: (embed-certs-483140) reserving static IP address...
	I0510 19:27:49.792830  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has current primary IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:49.793260  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "embed-certs-483140", mac: "52:54:00:2c:f8:9f", ip: "192.168.72.231"} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:27:49.793283  459268 main.go:141] libmachine: (embed-certs-483140) reserved static IP address 192.168.72.231 for domain embed-certs-483140
	I0510 19:27:49.793301  459268 main.go:141] libmachine: (embed-certs-483140) DBG | skip adding static IP to network mk-embed-certs-483140 - found existing host DHCP lease matching {name: "embed-certs-483140", mac: "52:54:00:2c:f8:9f", ip: "192.168.72.231"}
	I0510 19:27:49.793315  459268 main.go:141] libmachine: (embed-certs-483140) DBG | Getting to WaitForSSH function...
	I0510 19:27:49.793330  459268 main.go:141] libmachine: (embed-certs-483140) waiting for SSH...
	I0510 19:27:49.795680  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:49.796092  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:27:49.796115  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:49.796237  459268 main.go:141] libmachine: (embed-certs-483140) DBG | Using SSH client type: external
	I0510 19:27:49.796292  459268 main.go:141] libmachine: (embed-certs-483140) DBG | Using SSH private key: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/embed-certs-483140/id_rsa (-rw-------)
	I0510 19:27:49.796323  459268 main.go:141] libmachine: (embed-certs-483140) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.231 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20720-388787/.minikube/machines/embed-certs-483140/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0510 19:27:49.796357  459268 main.go:141] libmachine: (embed-certs-483140) DBG | About to run SSH command:
	I0510 19:27:49.796369  459268 main.go:141] libmachine: (embed-certs-483140) DBG | exit 0
	I0510 19:27:49.923834  459268 main.go:141] libmachine: (embed-certs-483140) DBG | SSH cmd err, output: <nil>: 
	I0510 19:27:49.924265  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetConfigRaw
	I0510 19:27:49.924904  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetIP
	I0510 19:27:49.928115  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:49.928557  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:27:49.928589  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:49.928844  459268 profile.go:143] Saving config to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/embed-certs-483140/config.json ...
	I0510 19:27:49.929086  459268 machine.go:93] provisionDockerMachine start ...
	I0510 19:27:49.929120  459268 main.go:141] libmachine: (embed-certs-483140) Calling .DriverName
	I0510 19:27:49.929435  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHHostname
	I0510 19:27:49.931867  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:49.932242  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:27:49.932278  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:49.932387  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHPort
	I0510 19:27:49.932602  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:27:49.932748  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:27:49.932878  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHUsername
	I0510 19:27:49.933115  459268 main.go:141] libmachine: Using SSH client type: native
	I0510 19:27:49.933388  459268 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.72.231 22 <nil> <nil>}
	I0510 19:27:49.933401  459268 main.go:141] libmachine: About to run SSH command:
	hostname
	I0510 19:27:50.044168  459268 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0510 19:27:50.044204  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetMachineName
	I0510 19:27:50.044481  459268 buildroot.go:166] provisioning hostname "embed-certs-483140"
	I0510 19:27:50.044509  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetMachineName
	I0510 19:27:50.044693  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHHostname
	I0510 19:27:50.047840  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:50.048210  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:27:50.048232  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:50.048417  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHPort
	I0510 19:27:50.048632  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:27:50.048790  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:27:50.048942  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHUsername
	I0510 19:27:50.049085  459268 main.go:141] libmachine: Using SSH client type: native
	I0510 19:27:50.049295  459268 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.72.231 22 <nil> <nil>}
	I0510 19:27:50.049308  459268 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-483140 && echo "embed-certs-483140" | sudo tee /etc/hostname
	I0510 19:27:50.174048  459268 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-483140
	
	I0510 19:27:50.174083  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHHostname
	I0510 19:27:50.177045  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:50.177447  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:27:50.177480  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:50.177653  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHPort
	I0510 19:27:50.177869  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:27:50.178002  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:27:50.178154  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHUsername
	I0510 19:27:50.178284  459268 main.go:141] libmachine: Using SSH client type: native
	I0510 19:27:50.178498  459268 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.72.231 22 <nil> <nil>}
	I0510 19:27:50.178514  459268 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-483140' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-483140/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-483140' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0510 19:27:50.298589  459268 main.go:141] libmachine: SSH cmd err, output: <nil>: 
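The SSH script a few lines up ensures /etc/hosts has a 127.0.1.1 entry for the machine's hostname, rewriting an existing 127.0.1.1 line or appending one when the name is absent. A minimal Go sketch of the same replace-or-append logic (withHostnameEntry is a hypothetical helper; it prints the result instead of writing the file):

package main

import (
	"fmt"
	"os"
	"strings"
)

func withHostnameEntry(hosts, name string) string {
	lines := strings.Split(strings.TrimRight(hosts, "\n"), "\n")
	for _, l := range lines {
		fields := strings.Fields(l)
		if len(fields) > 0 && fields[len(fields)-1] == name {
			return hosts // an entry for this hostname already exists
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name // rewrite the existing 127.0.1.1 line
			return strings.Join(lines, "\n") + "\n"
		}
	}
	return strings.Join(append(lines, "127.0.1.1 "+name), "\n") + "\n"
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print(withHostnameEntry(string(data), "embed-certs-483140"))
}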
	I0510 19:27:50.298629  459268 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20720-388787/.minikube CaCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20720-388787/.minikube}
	I0510 19:27:50.298678  459268 buildroot.go:174] setting up certificates
	I0510 19:27:50.298688  459268 provision.go:84] configureAuth start
	I0510 19:27:50.298698  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetMachineName
	I0510 19:27:50.299119  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetIP
	I0510 19:27:50.301907  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:50.302237  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:27:50.302256  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:50.302394  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHHostname
	I0510 19:27:50.305191  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:50.305523  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:27:50.305545  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:50.305718  459268 provision.go:143] copyHostCerts
	I0510 19:27:50.305792  459268 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem, removing ...
	I0510 19:27:50.305807  459268 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem
	I0510 19:27:50.305860  459268 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem (1078 bytes)
	I0510 19:27:50.305962  459268 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem, removing ...
	I0510 19:27:50.305970  459268 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem
	I0510 19:27:50.306000  459268 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem (1123 bytes)
	I0510 19:27:50.306073  459268 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem, removing ...
	I0510 19:27:50.306087  459268 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem
	I0510 19:27:50.306105  459268 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem (1675 bytes)
	I0510 19:27:50.306169  459268 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem org=jenkins.embed-certs-483140 san=[127.0.0.1 192.168.72.231 embed-certs-483140 localhost minikube]
	I0510 19:27:50.615586  459268 provision.go:177] copyRemoteCerts
	I0510 19:27:50.615663  459268 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0510 19:27:50.615691  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHHostname
	I0510 19:27:50.618693  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:50.619094  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:27:50.619124  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:50.619296  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHPort
	I0510 19:27:50.619467  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:27:50.619613  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHUsername
	I0510 19:27:50.619728  459268 sshutil.go:53] new ssh client: &{IP:192.168.72.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/embed-certs-483140/id_rsa Username:docker}
	I0510 19:27:50.709319  459268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0510 19:27:50.739864  459268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0510 19:27:50.769743  459268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0510 19:27:50.799032  459268 provision.go:87] duration metric: took 500.330996ms to configureAuth
	I0510 19:27:50.799064  459268 buildroot.go:189] setting minikube options for container-runtime
	I0510 19:27:50.799354  459268 config.go:182] Loaded profile config "embed-certs-483140": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 19:27:50.799434  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHHostname
	I0510 19:27:50.802338  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:50.802753  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:27:50.802796  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:50.802915  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHPort
	I0510 19:27:50.803096  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:27:50.803296  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:27:50.803423  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHUsername
	I0510 19:27:50.803591  459268 main.go:141] libmachine: Using SSH client type: native
	I0510 19:27:50.803807  459268 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.72.231 22 <nil> <nil>}
	I0510 19:27:50.803830  459268 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0510 19:27:51.055936  459268 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0510 19:27:51.055969  459268 machine.go:96] duration metric: took 1.126866865s to provisionDockerMachine
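(Annotation, not part of the captured log.) The provisioning above boils down to running one remote shell command over the SSH client shown in the sshutil.go line: write a systemd environment drop-in for CRI-O, then restart the service. A minimal sketch of that pattern, assuming golang.org/x/crypto/ssh and reusing the key path, user and command from this log (it is not minikube's ssh_runner):

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and user taken from the "new ssh client" line above.
	keyPEM, err := os.ReadFile("/home/jenkins/minikube-integration/20720-388787/.minikube/machines/embed-certs-483140/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyPEM)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "192.168.72.231:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	// Same drop-in-and-restart command the log shows being run.
	cmd := `sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`
	out, err := sess.CombinedOutput(cmd)
	fmt.Println(string(out))
	if err != nil {
		log.Fatal(err)
	}
}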
	I0510 19:27:51.055989  459268 start.go:293] postStartSetup for "embed-certs-483140" (driver="kvm2")
	I0510 19:27:51.056002  459268 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0510 19:27:51.056026  459268 main.go:141] libmachine: (embed-certs-483140) Calling .DriverName
	I0510 19:27:51.056453  459268 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0510 19:27:51.056494  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHHostname
	I0510 19:27:51.059782  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:51.060458  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:27:51.060503  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:51.060671  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHPort
	I0510 19:27:51.061017  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:27:51.061277  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHUsername
	I0510 19:27:51.061481  459268 sshutil.go:53] new ssh client: &{IP:192.168.72.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/embed-certs-483140/id_rsa Username:docker}
	I0510 19:27:51.153337  459268 ssh_runner.go:195] Run: cat /etc/os-release
	I0510 19:27:51.158738  459268 info.go:137] Remote host: Buildroot 2024.11.2
	I0510 19:27:51.158782  459268 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-388787/.minikube/addons for local assets ...
	I0510 19:27:51.158876  459268 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-388787/.minikube/files for local assets ...
	I0510 19:27:51.158982  459268 filesync.go:149] local asset: /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem -> 3959802.pem in /etc/ssl/certs
	I0510 19:27:51.159078  459268 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0510 19:27:51.171765  459268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem --> /etc/ssl/certs/3959802.pem (1708 bytes)
	I0510 19:27:51.204973  459268 start.go:296] duration metric: took 148.937348ms for postStartSetup
	I0510 19:27:51.205024  459268 fix.go:56] duration metric: took 22.803970548s for fixHost
	I0510 19:27:51.205051  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHHostname
	I0510 19:27:51.208258  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:51.208723  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:27:51.208748  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:51.208995  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHPort
	I0510 19:27:51.209219  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:27:51.209421  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:27:51.209566  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHUsername
	I0510 19:27:51.209735  459268 main.go:141] libmachine: Using SSH client type: native
	I0510 19:27:51.209940  459268 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.72.231 22 <nil> <nil>}
	I0510 19:27:51.209947  459268 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0510 19:27:51.320755  459268 main.go:141] libmachine: SSH cmd err, output: <nil>: 1746905271.291613089
	
	I0510 19:27:51.320787  459268 fix.go:216] guest clock: 1746905271.291613089
	I0510 19:27:51.320798  459268 fix.go:229] Guest: 2025-05-10 19:27:51.291613089 +0000 UTC Remote: 2025-05-10 19:27:51.20502902 +0000 UTC m=+27.360293338 (delta=86.584069ms)
	I0510 19:27:51.320828  459268 fix.go:200] guest clock delta is within tolerance: 86.584069ms
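(Annotation, not part of the captured log.) The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the machine when the delta is small (about 86ms here). A rough sketch of that comparison, using the literal value from this run; the 1-second tolerance below is an assumption for illustration, not minikube's actual threshold:

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns `date +%s.%N` output (e.g. "1746905271.291613089")
// into a time.Time. %N always yields nine digits, so the fraction is nanoseconds.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1746905271.291613089") // value from the log above
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	fmt.Printf("guest clock delta: %v\n", delta)
	// Assumed tolerance; the log only shows that ~86ms passed the check.
	if math.Abs(delta.Seconds()) < 1.0 {
		fmt.Println("guest clock delta is within tolerance")
	}
}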
	I0510 19:27:51.320835  459268 start.go:83] releasing machines lock for "embed-certs-483140", held for 22.919808938s
	I0510 19:27:51.320863  459268 main.go:141] libmachine: (embed-certs-483140) Calling .DriverName
	I0510 19:27:51.321154  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetIP
	I0510 19:27:51.324081  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:51.324459  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:27:51.324483  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:51.324692  459268 main.go:141] libmachine: (embed-certs-483140) Calling .DriverName
	I0510 19:27:51.325214  459268 main.go:141] libmachine: (embed-certs-483140) Calling .DriverName
	I0510 19:27:51.325408  459268 main.go:141] libmachine: (embed-certs-483140) Calling .DriverName
	I0510 19:27:51.325548  459268 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0510 19:27:51.325594  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHHostname
	I0510 19:27:51.325646  459268 ssh_runner.go:195] Run: cat /version.json
	I0510 19:27:51.325681  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHHostname
	I0510 19:27:51.328440  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:51.328753  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:27:51.328794  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:51.328818  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:51.329002  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHPort
	I0510 19:27:51.329194  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:27:51.329232  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:27:51.329255  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:51.329376  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHUsername
	I0510 19:27:51.329402  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHPort
	I0510 19:27:51.329568  459268 sshutil.go:53] new ssh client: &{IP:192.168.72.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/embed-certs-483140/id_rsa Username:docker}
	I0510 19:27:51.329584  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:27:51.329733  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHUsername
	I0510 19:27:51.329873  459268 sshutil.go:53] new ssh client: &{IP:192.168.72.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/embed-certs-483140/id_rsa Username:docker}
	I0510 19:27:51.446190  459268 ssh_runner.go:195] Run: systemctl --version
	I0510 19:27:51.452760  459268 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0510 19:27:51.607666  459268 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0510 19:27:51.616239  459268 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0510 19:27:51.616317  459268 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0510 19:27:51.636571  459268 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0510 19:27:51.636605  459268 start.go:495] detecting cgroup driver to use...
	I0510 19:27:51.636667  459268 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0510 19:27:51.657444  459268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0510 19:27:51.676562  459268 docker.go:225] disabling cri-docker service (if available) ...
	I0510 19:27:51.676630  459268 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0510 19:27:51.694731  459268 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0510 19:27:51.712216  459268 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0510 19:27:51.876386  459268 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0510 19:27:52.020882  459268 docker.go:241] disabling docker service ...
	I0510 19:27:52.020959  459268 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0510 19:27:52.037031  459268 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0510 19:27:52.051939  459268 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0510 19:27:52.242011  459268 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0510 19:27:52.396595  459268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0510 19:27:52.412573  459268 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0510 19:27:52.436314  459268 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0510 19:27:52.436382  459268 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:27:52.448707  459268 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0510 19:27:52.448775  459268 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:27:52.460614  459268 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:27:52.472822  459268 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:27:52.484913  459268 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0510 19:27:52.497971  459268 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:27:52.511526  459268 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:27:52.533115  459268 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:27:52.545947  459268 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0510 19:27:52.556778  459268 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0510 19:27:52.556857  459268 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0510 19:27:52.573550  459268 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0510 19:27:52.589299  459268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 19:27:52.732786  459268 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0510 19:27:52.860039  459268 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0510 19:27:52.860135  459268 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0510 19:27:52.865273  459268 start.go:563] Will wait 60s for crictl version
	I0510 19:27:52.865329  459268 ssh_runner.go:195] Run: which crictl
	I0510 19:27:52.869469  459268 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0510 19:27:52.910450  459268 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0510 19:27:52.910548  459268 ssh_runner.go:195] Run: crio --version
	I0510 19:27:52.940082  459268 ssh_runner.go:195] Run: crio --version
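(Annotation, not part of the captured log.) After restarting CRI-O, the log waits up to 60s for /var/run/crio/crio.sock to exist and then verifies the runtime with crictl. A standalone sketch of that wait-then-verify pattern (the 500ms poll interval is an assumption):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForPath polls until path exists or the deadline passes, mirroring the
// "Will wait 60s for socket path" step above.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	// The log then checks the runtime with `sudo /usr/bin/crictl version`.
	out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
	if err != nil {
		fmt.Println("crictl version failed:", err)
	}
	fmt.Print(string(out))
}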
	I0510 19:27:52.972063  459268 out.go:177] * Preparing Kubernetes v1.33.0 on CRI-O 1.29.1 ...
	I0510 19:27:52.973307  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetIP
	I0510 19:27:52.976415  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:52.976789  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:27:52.976816  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:52.977066  459268 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0510 19:27:52.981433  459268 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0510 19:27:52.995881  459268 kubeadm.go:875] updating cluster {Name:embed-certs-483140 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:embed-certs-483140 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.231 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0510 19:27:52.995991  459268 preload.go:131] Checking if preload exists for k8s version v1.33.0 and runtime crio
	I0510 19:27:52.996030  459268 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 19:27:53.034258  459268 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.33.0". assuming images are not preloaded.
	I0510 19:27:53.034325  459268 ssh_runner.go:195] Run: which lz4
	I0510 19:27:53.038628  459268 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0510 19:27:53.043283  459268 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0510 19:27:53.043322  459268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (413217622 bytes)
	I0510 19:27:49.067037  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:49.566942  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:50.066669  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:50.566620  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:51.066533  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:51.567303  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:52.066558  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:52.567193  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:53.066234  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:53.567160  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:54.704270  459268 crio.go:462] duration metric: took 1.665684843s to copy over tarball
	I0510 19:27:54.704390  459268 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0510 19:27:56.898604  459268 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.19418195s)
	I0510 19:27:56.898641  459268 crio.go:469] duration metric: took 2.194331535s to extract the tarball
	I0510 19:27:56.898653  459268 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0510 19:27:56.939194  459268 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 19:27:56.988274  459268 crio.go:514] all images are preloaded for cri-o runtime.
	I0510 19:27:56.988305  459268 cache_images.go:84] Images are preloaded, skipping loading
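(Annotation, not part of the captured log.) Both preload checks above run `sudo crictl images --output json` and decide whether the expected control-plane images are already present; the first pass failed to find kube-apiserver:v1.33.0 and triggered the tarball copy, the second pass (after extraction) succeeded. A minimal sketch of that decision, assuming crictl's JSON output has the shape {"images":[{"repoTags":[...]}]} and shelling out locally instead of through the ssh_runner:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages mirrors the assumed shape of `crictl images --output json`.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		fmt.Println("unexpected JSON:", err)
		return
	}
	want := "registry.k8s.io/kube-apiserver:v1.33.0" // image the log checks for
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				fmt.Println("all images are preloaded, skipping loading")
				return
			}
		}
	}
	fmt.Println("assuming images are not preloaded; would copy and extract the .tar.lz4 preload")
}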
	I0510 19:27:56.988315  459268 kubeadm.go:926] updating node { 192.168.72.231 8443 v1.33.0 crio true true} ...
	I0510 19:27:56.988421  459268 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.33.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-483140 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.231
	
	[Install]
	 config:
	{KubernetesVersion:v1.33.0 ClusterName:embed-certs-483140 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0510 19:27:56.988518  459268 ssh_runner.go:195] Run: crio config
	I0510 19:27:57.044585  459268 cni.go:84] Creating CNI manager for ""
	I0510 19:27:57.044616  459268 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0510 19:27:57.044632  459268 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0510 19:27:57.044674  459268 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.231 APIServerPort:8443 KubernetesVersion:v1.33.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-483140 NodeName:embed-certs-483140 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.231"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.231 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0510 19:27:57.044833  459268 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.231
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-483140"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.231"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.231"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.33.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0510 19:27:57.044929  459268 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.33.0
	I0510 19:27:57.057883  459268 binaries.go:44] Found k8s binaries, skipping transfer
	I0510 19:27:57.057964  459268 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0510 19:27:57.070669  459268 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0510 19:27:57.096191  459268 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0510 19:27:57.120219  459268 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
	I0510 19:27:57.143282  459268 ssh_runner.go:195] Run: grep 192.168.72.231	control-plane.minikube.internal$ /etc/hosts
	I0510 19:27:57.148049  459268 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.231	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0510 19:27:57.164188  459268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 19:27:57.307271  459268 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0510 19:27:57.342355  459268 certs.go:68] Setting up /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/embed-certs-483140 for IP: 192.168.72.231
	I0510 19:27:57.342381  459268 certs.go:194] generating shared ca certs ...
	I0510 19:27:57.342405  459268 certs.go:226] acquiring lock for ca certs: {Name:mk8db74782205da4ac57ef815dd495cda255251a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 19:27:57.342591  459268 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20720-388787/.minikube/ca.key
	I0510 19:27:57.342680  459268 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.key
	I0510 19:27:57.342697  459268 certs.go:256] generating profile certs ...
	I0510 19:27:57.342827  459268 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/embed-certs-483140/client.key
	I0510 19:27:57.342886  459268 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/embed-certs-483140/apiserver.key.027a75a8
	I0510 19:27:57.342922  459268 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/embed-certs-483140/proxy-client.key
	I0510 19:27:57.343035  459268 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/395980.pem (1338 bytes)
	W0510 19:27:57.343078  459268 certs.go:480] ignoring /home/jenkins/minikube-integration/20720-388787/.minikube/certs/395980_empty.pem, impossibly tiny 0 bytes
	I0510 19:27:57.343092  459268 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem (1679 bytes)
	I0510 19:27:57.343124  459268 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem (1078 bytes)
	I0510 19:27:57.343154  459268 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem (1123 bytes)
	I0510 19:27:57.343196  459268 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem (1675 bytes)
	I0510 19:27:57.343281  459268 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem (1708 bytes)
	I0510 19:27:57.343973  459268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0510 19:27:57.378887  459268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0510 19:27:57.420451  459268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0510 19:27:57.457206  459268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0510 19:27:57.499641  459268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/embed-certs-483140/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0510 19:27:57.534055  459268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/embed-certs-483140/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0510 19:27:57.564979  459268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/embed-certs-483140/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0510 19:27:57.601743  459268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/embed-certs-483140/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0510 19:27:57.633117  459268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/certs/395980.pem --> /usr/share/ca-certificates/395980.pem (1338 bytes)
	I0510 19:27:57.664410  459268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem --> /usr/share/ca-certificates/3959802.pem (1708 bytes)
	I0510 19:27:57.693525  459268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0510 19:27:57.723750  459268 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0510 19:27:57.745486  459268 ssh_runner.go:195] Run: openssl version
	I0510 19:27:57.752288  459268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/395980.pem && ln -fs /usr/share/ca-certificates/395980.pem /etc/ssl/certs/395980.pem"
	I0510 19:27:57.766087  459268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/395980.pem
	I0510 19:27:57.771459  459268 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 10 18:00 /usr/share/ca-certificates/395980.pem
	I0510 19:27:57.771521  459268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/395980.pem
	I0510 19:27:57.778642  459268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/395980.pem /etc/ssl/certs/51391683.0"
	I0510 19:27:57.792251  459268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3959802.pem && ln -fs /usr/share/ca-certificates/3959802.pem /etc/ssl/certs/3959802.pem"
	I0510 19:27:57.806097  459268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3959802.pem
	I0510 19:27:57.811543  459268 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 10 18:00 /usr/share/ca-certificates/3959802.pem
	I0510 19:27:57.811613  459268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3959802.pem
	I0510 19:27:57.818894  459268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3959802.pem /etc/ssl/certs/3ec20f2e.0"
	I0510 19:27:57.833637  459268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0510 19:27:57.848084  459268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0510 19:27:57.853506  459268 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 10 17:52 /usr/share/ca-certificates/minikubeCA.pem
	I0510 19:27:57.853569  459268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0510 19:27:57.861284  459268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
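(Annotation, not part of the captured log.) The openssl/ln sequence above installs each CA into the system trust directory under its OpenSSL subject-hash name (for example /etc/ssl/certs/b5213941.0 for minikubeCA.pem). A small sketch of the same idea, shelling out to the same `openssl x509 -hash -noout` invocation shown in the log and symlinking the result; paths are the ones from this run, and root privileges are assumed for the symlink:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installTrustedCert links a CA certificate into /etc/ssl/certs under its
// OpenSSL subject-hash name, e.g. b5213941.0, like the ln -fs calls above.
func installTrustedCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // replace any stale link, as ln -fs does
	return os.Symlink(pemPath, link)
}

func main() {
	for _, p := range []string{
		"/usr/share/ca-certificates/395980.pem",
		"/usr/share/ca-certificates/3959802.pem",
		"/usr/share/ca-certificates/minikubeCA.pem",
	} {
		if err := installTrustedCert(p); err != nil {
			fmt.Println("skipping", p, ":", err)
		}
	}
}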
	I0510 19:27:57.875248  459268 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0510 19:27:57.881000  459268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0510 19:27:57.889239  459268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0510 19:27:57.898408  459268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0510 19:27:57.907154  459268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0510 19:27:57.915654  459268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0510 19:27:57.924501  459268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
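(Annotation, not part of the captured log.) The `-checkend 86400` calls above ask whether each control-plane certificate is still valid for at least another 24 hours. The same check expressed in Go, parsing the PEM directly instead of shelling out to openssl; the path reuses one from the log, and the code assumes the first PEM block in the file is the certificate:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the certificate at path remains valid for at least
// d more time, the Go equivalent of `openssl x509 -noout -checkend`.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("valid for another 24h:", ok)
}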
	I0510 19:27:57.932344  459268 kubeadm.go:392] StartCluster: {Name:embed-certs-483140 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:embed-certs-483140 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.231 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 19:27:57.932450  459268 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0510 19:27:57.932515  459268 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0510 19:27:57.977038  459268 cri.go:89] found id: ""
	I0510 19:27:57.977121  459268 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0510 19:27:57.988821  459268 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0510 19:27:57.988856  459268 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0510 19:27:57.988917  459268 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0510 19:27:58.000862  459268 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0510 19:27:58.001626  459268 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-483140" does not appear in /home/jenkins/minikube-integration/20720-388787/kubeconfig
	I0510 19:27:58.001911  459268 kubeconfig.go:62] /home/jenkins/minikube-integration/20720-388787/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-483140" cluster setting kubeconfig missing "embed-certs-483140" context setting]
	I0510 19:27:58.002463  459268 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-388787/kubeconfig: {Name:mk5ad7285fe4c17b2779ea6d5a539f101fe94797 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 19:27:58.012994  459268 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0510 19:27:58.026138  459268 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.72.231
	I0510 19:27:58.026178  459268 kubeadm.go:1152] stopping kube-system containers ...
	I0510 19:27:58.026192  459268 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0510 19:27:58.026251  459268 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0510 19:27:58.069294  459268 cri.go:89] found id: ""
	I0510 19:27:58.069376  459268 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0510 19:27:58.089295  459268 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0510 19:27:58.101786  459268 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0510 19:27:58.101807  459268 kubeadm.go:157] found existing configuration files:
	
	I0510 19:27:58.101851  459268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0510 19:27:58.112987  459268 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0510 19:27:58.113053  459268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0510 19:27:58.125239  459268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0510 19:27:58.137764  459268 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0510 19:27:58.137828  459268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0510 19:27:58.150429  459268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0510 19:27:58.163051  459268 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0510 19:27:58.163137  459268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0510 19:27:58.175159  459268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0510 19:27:58.186717  459268 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0510 19:27:58.186792  459268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0510 19:27:58.200405  459268 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0510 19:27:58.214273  459268 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0510 19:27:58.343615  459268 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0510 19:27:54.066832  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:54.567225  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:55.067095  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:55.567141  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:56.066981  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:56.566711  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:57.066205  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:57.566404  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:58.067102  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:58.566428  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:59.367696  459268 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.024040496s)
	I0510 19:27:59.367731  459268 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0510 19:27:59.640666  459268 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0510 19:27:59.716214  459268 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0510 19:27:59.797846  459268 api_server.go:52] waiting for apiserver process to appear ...
	I0510 19:27:59.797921  459268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:00.298404  459268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:00.798112  459268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:00.834727  459268 api_server.go:72] duration metric: took 1.036892245s to wait for apiserver process to appear ...
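(Annotation, not part of the captured log.) The repeated pgrep runs, both here and in the interleaved lines from the parallel 459056 profile, are a poll loop: retry `pgrep -xnf kube-apiserver.*minikube.*` until it exits 0 or a deadline passes. A standalone sketch of that loop; the interval, timeout, and local exec (instead of the ssh_runner) are assumptions:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until a kube-apiserver process started
// by minikube appears, or the timeout expires.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		// Same pgrep pattern the log runs via ssh_runner.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("kube-apiserver process did not appear within %v", timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver process is up; next step is the /healthz poll")
}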
	I0510 19:28:00.834760  459268 api_server.go:88] waiting for apiserver healthz status ...
	I0510 19:28:00.834784  459268 api_server.go:253] Checking apiserver healthz at https://192.168.72.231:8443/healthz ...
	I0510 19:28:00.835339  459268 api_server.go:269] stopped: https://192.168.72.231:8443/healthz: Get "https://192.168.72.231:8443/healthz": dial tcp 192.168.72.231:8443: connect: connection refused
	I0510 19:28:01.334998  459268 api_server.go:253] Checking apiserver healthz at https://192.168.72.231:8443/healthz ...
	I0510 19:27:59.066475  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:59.567069  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:00.066988  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:00.566888  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:01.066769  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:01.566741  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:02.066555  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:02.566338  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:03.066492  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:03.567302  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:03.904035  459268 api_server.go:279] https://192.168.72.231:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0510 19:28:03.904079  459268 api_server.go:103] status: https://192.168.72.231:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0510 19:28:03.904097  459268 api_server.go:253] Checking apiserver healthz at https://192.168.72.231:8443/healthz ...
	I0510 19:28:03.956072  459268 api_server.go:279] https://192.168.72.231:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0510 19:28:03.956108  459268 api_server.go:103] status: https://192.168.72.231:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0510 19:28:04.335740  459268 api_server.go:253] Checking apiserver healthz at https://192.168.72.231:8443/healthz ...
	I0510 19:28:04.341381  459268 api_server.go:279] https://192.168.72.231:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0510 19:28:04.341410  459268 api_server.go:103] status: https://192.168.72.231:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0510 19:28:04.835035  459268 api_server.go:253] Checking apiserver healthz at https://192.168.72.231:8443/healthz ...
	I0510 19:28:04.843795  459268 api_server.go:279] https://192.168.72.231:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0510 19:28:04.843856  459268 api_server.go:103] status: https://192.168.72.231:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0510 19:28:05.335582  459268 api_server.go:253] Checking apiserver healthz at https://192.168.72.231:8443/healthz ...
	I0510 19:28:05.340256  459268 api_server.go:279] https://192.168.72.231:8443/healthz returned 200:
	ok
	I0510 19:28:05.348062  459268 api_server.go:141] control plane version: v1.33.0
	I0510 19:28:05.348092  459268 api_server.go:131] duration metric: took 4.513324632s to wait for apiserver health ...
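
The healthz wait above follows a simple pattern: poll https://<control-plane>:8443/healthz roughly every 500 ms, treat a 500 response (with its list of failed poststarthooks) as not-ready, and stop once the endpoint returns 200 "ok". A minimal Go sketch of that loop is below; the URL is taken from the log, but the insecure TLS client is an assumption for illustration only, not minikube's actual client setup.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// HTTP 200 ("ok") or the timeout elapses. Illustrative sketch only, not
// minikube's api_server.go implementation.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// InsecureSkipVerify is an assumption for this sketch; a real client
		// would trust the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: ok
			}
			// A 500 body lists each failed poststarthook, as in the log above.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // the log shows ~500ms between checks
	}
	return fmt.Errorf("apiserver at %s did not become healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.231:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
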
	I0510 19:28:05.348102  459268 cni.go:84] Creating CNI manager for ""
	I0510 19:28:05.348108  459268 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0510 19:28:05.349901  459268 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0510 19:28:05.351199  459268 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0510 19:28:05.369532  459268 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
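
The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is a bridge CNI configuration. The exact contents minikube generates are not shown in the log; as a rough illustration only, a generic conflist accepted by the standard CNI bridge and portmap plugins looks like the following (the bridge name and subnet are placeholder assumptions):

{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
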
	I0510 19:28:05.403896  459268 system_pods.go:43] waiting for kube-system pods to appear ...
	I0510 19:28:05.410420  459268 system_pods.go:59] 8 kube-system pods found
	I0510 19:28:05.410466  459268 system_pods.go:61] "coredns-674b8bbfcf-4ld9c" [2af71141-c2b9-4788-8dcf-19ae78077d83] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0510 19:28:05.410476  459268 system_pods.go:61] "etcd-embed-certs-483140" [18335556-d523-4f93-9975-36c6ec710b8e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0510 19:28:05.410484  459268 system_pods.go:61] "kube-apiserver-embed-certs-483140" [ccfb56df-98d8-49bd-af84-4897349b90fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0510 19:28:05.410489  459268 system_pods.go:61] "kube-controller-manager-embed-certs-483140" [3aa74b28-d50d-4a50-b222-38dea567ed3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0510 19:28:05.410494  459268 system_pods.go:61] "kube-proxy-b2gvg" [d17e7a7f-57d3-4fe4-ace9-7a2fc70bb585] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0510 19:28:05.410500  459268 system_pods.go:61] "kube-scheduler-embed-certs-483140" [1eb4348b-46a3-45d6-bd78-d5d9045b600c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0510 19:28:05.410505  459268 system_pods.go:61] "metrics-server-f79f97bbb-dbl7q" [b17e1431-b05d-4d16-8f92-46b9526e09fe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0510 19:28:05.410510  459268 system_pods.go:61] "storage-provisioner" [e9b8f9e8-8add-47f3-a9a7-51fae3a958d5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0510 19:28:05.410519  459268 system_pods.go:74] duration metric: took 6.592608ms to wait for pod list to return data ...
	I0510 19:28:05.410530  459268 node_conditions.go:102] verifying NodePressure condition ...
	I0510 19:28:05.415787  459268 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0510 19:28:05.415827  459268 node_conditions.go:123] node cpu capacity is 2
	I0510 19:28:05.415843  459268 node_conditions.go:105] duration metric: took 5.307579ms to run NodePressure ...
	I0510 19:28:05.415868  459268 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0510 19:28:05.791590  459268 kubeadm.go:720] waiting for restarted kubelet to initialise ...
	I0510 19:28:05.795260  459268 kubeadm.go:735] kubelet initialised
	I0510 19:28:05.795284  459268 kubeadm.go:736] duration metric: took 3.665992ms waiting for restarted kubelet to initialise ...
	I0510 19:28:05.795305  459268 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0510 19:28:05.811911  459268 ops.go:34] apiserver oom_adj: -16
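
The oom_adj probe above simply reads /proc/<pid>/oom_adj for the running kube-apiserver; a value of -16 makes the kernel much less likely to OOM-kill it. A small illustrative Go equivalent follows (the pgrep flags are an assumption; the log runs the same check through bash):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// apiserverOOMAdj finds the newest kube-apiserver process with pgrep and
// reads its oom_adj from /proc. Illustrative sketch, not minikube's ops.go.
func apiserverOOMAdj() (string, error) {
	pid, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
	if err != nil {
		return "", fmt.Errorf("kube-apiserver process not found: %v", err)
	}
	data, err := os.ReadFile(filepath.Join("/proc", strings.TrimSpace(string(pid)), "oom_adj"))
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	adj, err := apiserverOOMAdj()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver oom_adj:", adj) // the log above records -16
}
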
	I0510 19:28:05.811945  459268 kubeadm.go:593] duration metric: took 7.823080185s to restartPrimaryControlPlane
	I0510 19:28:05.811959  459268 kubeadm.go:394] duration metric: took 7.879628572s to StartCluster
	I0510 19:28:05.811982  459268 settings.go:142] acquiring lock: {Name:mk4ab6a112c947bfdedd8044017a7c560266fb5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 19:28:05.812070  459268 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20720-388787/kubeconfig
	I0510 19:28:05.813672  459268 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-388787/kubeconfig: {Name:mk5ad7285fe4c17b2779ea6d5a539f101fe94797 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 19:28:05.814006  459268 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.231 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0510 19:28:05.814204  459268 config.go:182] Loaded profile config "embed-certs-483140": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 19:28:05.814159  459268 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0510 19:28:05.814258  459268 addons.go:69] Setting default-storageclass=true in profile "embed-certs-483140"
	I0510 19:28:05.814274  459268 addons.go:69] Setting dashboard=true in profile "embed-certs-483140"
	I0510 19:28:05.814258  459268 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-483140"
	I0510 19:28:05.814294  459268 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-483140"
	I0510 19:28:05.814286  459268 addons.go:69] Setting metrics-server=true in profile "embed-certs-483140"
	W0510 19:28:05.814306  459268 addons.go:247] addon storage-provisioner should already be in state true
	I0510 19:28:05.814315  459268 addons.go:238] Setting addon metrics-server=true in "embed-certs-483140"
	W0510 19:28:05.814323  459268 addons.go:247] addon metrics-server should already be in state true
	I0510 19:28:05.814336  459268 host.go:66] Checking if "embed-certs-483140" exists ...
	I0510 19:28:05.814357  459268 host.go:66] Checking if "embed-certs-483140" exists ...
	I0510 19:28:05.814279  459268 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-483140"
	I0510 19:28:05.814296  459268 addons.go:238] Setting addon dashboard=true in "embed-certs-483140"
	W0510 19:28:05.814480  459268 addons.go:247] addon dashboard should already be in state true
	I0510 19:28:05.814522  459268 host.go:66] Checking if "embed-certs-483140" exists ...
	I0510 19:28:05.814752  459268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:28:05.814784  459268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:28:05.814801  459268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:28:05.814812  459268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:28:05.814858  459268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:28:05.814903  459268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:28:05.814860  459268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:28:05.815049  459268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:28:05.815493  459268 out.go:177] * Verifying Kubernetes components...
	I0510 19:28:05.816761  459268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 19:28:05.832190  459268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34921
	I0510 19:28:05.833019  459268 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:28:05.833618  459268 main.go:141] libmachine: Using API Version  1
	I0510 19:28:05.833652  459268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:28:05.834069  459268 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:28:05.834652  459268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:28:05.834698  459268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:28:05.835356  459268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39063
	I0510 19:28:05.835412  459268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36825
	I0510 19:28:05.835824  459268 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:28:05.835909  459268 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:28:05.836388  459268 main.go:141] libmachine: Using API Version  1
	I0510 19:28:05.836411  459268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:28:05.836524  459268 main.go:141] libmachine: Using API Version  1
	I0510 19:28:05.836544  459268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:28:05.836805  459268 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:28:05.836925  459268 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:28:05.837086  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetState
	I0510 19:28:05.837502  459268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:28:05.837542  459268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:28:05.837861  459268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45851
	I0510 19:28:05.838446  459268 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:28:05.838949  459268 main.go:141] libmachine: Using API Version  1
	I0510 19:28:05.838974  459268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:28:05.839356  459268 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:28:05.840781  459268 addons.go:238] Setting addon default-storageclass=true in "embed-certs-483140"
	W0510 19:28:05.840809  459268 addons.go:247] addon default-storageclass should already be in state true
	I0510 19:28:05.840843  459268 host.go:66] Checking if "embed-certs-483140" exists ...
	I0510 19:28:05.841225  459268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:28:05.841283  459268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:28:05.841904  459268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:28:05.841957  459268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:28:05.855806  459268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38611
	I0510 19:28:05.856498  459268 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:28:05.857301  459268 main.go:141] libmachine: Using API Version  1
	I0510 19:28:05.857333  459268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:28:05.857754  459268 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:28:05.857831  459268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39121
	I0510 19:28:05.857977  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetState
	I0510 19:28:05.858290  459268 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:28:05.858779  459268 main.go:141] libmachine: Using API Version  1
	I0510 19:28:05.858803  459268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:28:05.858874  459268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38033
	I0510 19:28:05.859327  459268 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:28:05.859538  459268 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:28:05.859968  459268 main.go:141] libmachine: Using API Version  1
	I0510 19:28:05.859992  459268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:28:05.860232  459268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:28:05.860241  459268 main.go:141] libmachine: (embed-certs-483140) Calling .DriverName
	I0510 19:28:05.860273  459268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:28:05.860355  459268 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:28:05.860496  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetState
	I0510 19:28:05.862204  459268 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0510 19:28:05.862302  459268 main.go:141] libmachine: (embed-certs-483140) Calling .DriverName
	I0510 19:28:05.863409  459268 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0510 19:28:05.863496  459268 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0510 19:28:05.863512  459268 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0510 19:28:05.863528  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHHostname
	I0510 19:28:05.864433  459268 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0510 19:28:05.864458  459268 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0510 19:28:05.864480  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHHostname
	I0510 19:28:05.867368  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:28:05.867845  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:28:05.867993  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:28:05.868025  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:28:05.868296  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHPort
	I0510 19:28:05.868504  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:28:05.868556  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:28:05.868574  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:28:05.868691  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHUsername
	I0510 19:28:05.868814  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHPort
	I0510 19:28:05.868850  459268 sshutil.go:53] new ssh client: &{IP:192.168.72.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/embed-certs-483140/id_rsa Username:docker}
	I0510 19:28:05.868996  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:28:05.869204  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHUsername
	I0510 19:28:05.869389  459268 sshutil.go:53] new ssh client: &{IP:192.168.72.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/embed-certs-483140/id_rsa Username:docker}
	I0510 19:28:05.883698  459268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46855
	I0510 19:28:05.884370  459268 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:28:05.884927  459268 main.go:141] libmachine: Using API Version  1
	I0510 19:28:05.884961  459268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:28:05.885393  459268 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:28:05.885620  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetState
	I0510 19:28:05.887679  459268 main.go:141] libmachine: (embed-certs-483140) Calling .DriverName
	I0510 19:28:05.889699  459268 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0510 19:28:05.889946  459268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35865
	I0510 19:28:05.890351  459268 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:28:05.890843  459268 main.go:141] libmachine: Using API Version  1
	I0510 19:28:05.890898  459268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:28:05.891281  459268 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:28:05.891485  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetState
	I0510 19:28:05.891961  459268 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0510 19:28:05.893147  459268 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0510 19:28:05.893168  459268 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0510 19:28:05.893173  459268 main.go:141] libmachine: (embed-certs-483140) Calling .DriverName
	I0510 19:28:05.893192  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHHostname
	I0510 19:28:05.893397  459268 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0510 19:28:05.893412  459268 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0510 19:28:05.893429  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHHostname
	I0510 19:28:05.897062  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:28:05.897408  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:28:05.897473  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:28:05.897574  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:28:05.897702  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHPort
	I0510 19:28:05.897846  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:28:05.897995  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHUsername
	I0510 19:28:05.898008  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:28:05.898040  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:28:05.898173  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHPort
	I0510 19:28:05.898163  459268 sshutil.go:53] new ssh client: &{IP:192.168.72.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/embed-certs-483140/id_rsa Username:docker}
	I0510 19:28:05.898334  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:28:05.898489  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHUsername
	I0510 19:28:05.898590  459268 sshutil.go:53] new ssh client: &{IP:192.168.72.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/embed-certs-483140/id_rsa Username:docker}
	I0510 19:28:06.110607  459268 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0510 19:28:06.144859  459268 node_ready.go:35] waiting up to 6m0s for node "embed-certs-483140" to be "Ready" ...
	I0510 19:28:06.150324  459268 node_ready.go:49] node "embed-certs-483140" is "Ready"
	I0510 19:28:06.150351  459268 node_ready.go:38] duration metric: took 5.421565ms for node "embed-certs-483140" to be "Ready" ...
	I0510 19:28:06.150364  459268 api_server.go:52] waiting for apiserver process to appear ...
	I0510 19:28:06.150417  459268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:06.172762  459268 api_server.go:72] duration metric: took 358.714749ms to wait for apiserver process to appear ...
	I0510 19:28:06.172794  459268 api_server.go:88] waiting for apiserver healthz status ...
	I0510 19:28:06.172815  459268 api_server.go:253] Checking apiserver healthz at https://192.168.72.231:8443/healthz ...
	I0510 19:28:06.181737  459268 api_server.go:279] https://192.168.72.231:8443/healthz returned 200:
	ok
	I0510 19:28:06.183824  459268 api_server.go:141] control plane version: v1.33.0
	I0510 19:28:06.183848  459268 api_server.go:131] duration metric: took 11.047783ms to wait for apiserver health ...
	I0510 19:28:06.183857  459268 system_pods.go:43] waiting for kube-system pods to appear ...
	I0510 19:28:06.188111  459268 system_pods.go:59] 8 kube-system pods found
	I0510 19:28:06.188145  459268 system_pods.go:61] "coredns-674b8bbfcf-4ld9c" [2af71141-c2b9-4788-8dcf-19ae78077d83] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0510 19:28:06.188156  459268 system_pods.go:61] "etcd-embed-certs-483140" [18335556-d523-4f93-9975-36c6ec710b8e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0510 19:28:06.188168  459268 system_pods.go:61] "kube-apiserver-embed-certs-483140" [ccfb56df-98d8-49bd-af84-4897349b90fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0510 19:28:06.188177  459268 system_pods.go:61] "kube-controller-manager-embed-certs-483140" [3aa74b28-d50d-4a50-b222-38dea567ed3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0510 19:28:06.188184  459268 system_pods.go:61] "kube-proxy-b2gvg" [d17e7a7f-57d3-4fe4-ace9-7a2fc70bb585] Running
	I0510 19:28:06.188195  459268 system_pods.go:61] "kube-scheduler-embed-certs-483140" [1eb4348b-46a3-45d6-bd78-d5d9045b600c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0510 19:28:06.188214  459268 system_pods.go:61] "metrics-server-f79f97bbb-dbl7q" [b17e1431-b05d-4d16-8f92-46b9526e09fe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0510 19:28:06.188220  459268 system_pods.go:61] "storage-provisioner" [e9b8f9e8-8add-47f3-a9a7-51fae3a958d5] Running
	I0510 19:28:06.188231  459268 system_pods.go:74] duration metric: took 4.368046ms to wait for pod list to return data ...
	I0510 19:28:06.188242  459268 default_sa.go:34] waiting for default service account to be created ...
	I0510 19:28:06.193811  459268 default_sa.go:45] found service account: "default"
	I0510 19:28:06.193846  459268 default_sa.go:55] duration metric: took 5.591706ms for default service account to be created ...
	I0510 19:28:06.193860  459268 system_pods.go:116] waiting for k8s-apps to be running ...
	I0510 19:28:06.200177  459268 system_pods.go:86] 8 kube-system pods found
	I0510 19:28:06.200220  459268 system_pods.go:89] "coredns-674b8bbfcf-4ld9c" [2af71141-c2b9-4788-8dcf-19ae78077d83] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0510 19:28:06.200233  459268 system_pods.go:89] "etcd-embed-certs-483140" [18335556-d523-4f93-9975-36c6ec710b8e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0510 19:28:06.200244  459268 system_pods.go:89] "kube-apiserver-embed-certs-483140" [ccfb56df-98d8-49bd-af84-4897349b90fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0510 19:28:06.200254  459268 system_pods.go:89] "kube-controller-manager-embed-certs-483140" [3aa74b28-d50d-4a50-b222-38dea567ed3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0510 19:28:06.200260  459268 system_pods.go:89] "kube-proxy-b2gvg" [d17e7a7f-57d3-4fe4-ace9-7a2fc70bb585] Running
	I0510 19:28:06.200268  459268 system_pods.go:89] "kube-scheduler-embed-certs-483140" [1eb4348b-46a3-45d6-bd78-d5d9045b600c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0510 19:28:06.200276  459268 system_pods.go:89] "metrics-server-f79f97bbb-dbl7q" [b17e1431-b05d-4d16-8f92-46b9526e09fe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0510 19:28:06.200282  459268 system_pods.go:89] "storage-provisioner" [e9b8f9e8-8add-47f3-a9a7-51fae3a958d5] Running
	I0510 19:28:06.200291  459268 system_pods.go:126] duration metric: took 6.423763ms to wait for k8s-apps to be running ...
	I0510 19:28:06.200300  459268 system_svc.go:44] waiting for kubelet service to be running ....
	I0510 19:28:06.200370  459268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0510 19:28:06.223314  459268 system_svc.go:56] duration metric: took 22.998023ms WaitForService to wait for kubelet
	I0510 19:28:06.223354  459268 kubeadm.go:578] duration metric: took 409.308651ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0510 19:28:06.223387  459268 node_conditions.go:102] verifying NodePressure condition ...
	I0510 19:28:06.232818  459268 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0510 19:28:06.232856  459268 node_conditions.go:123] node cpu capacity is 2
	I0510 19:28:06.232872  459268 node_conditions.go:105] duration metric: took 9.479043ms to run NodePressure ...
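
The NodePressure verification above amounts to listing nodes and reading their capacity and pressure conditions from the API. Below is a sketch using client-go, assuming the kubeconfig path recorded earlier in the log; it is illustrative only, not the node_conditions.go code.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as written by the log above; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20720-388787/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, node := range nodes.Items {
		// Capacity figures correspond to the "ephemeral capacity" and
		// "cpu capacity" lines in the log above.
		storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := node.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral storage %s, cpu %s\n", node.Name, storage.String(), cpu.String())
		for _, cond := range node.Status.Conditions {
			switch cond.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if cond.Status == corev1.ConditionTrue {
					fmt.Printf("  pressure condition %s is True: %s\n", cond.Type, cond.Message)
				}
			}
		}
	}
}
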
	I0510 19:28:06.232902  459268 start.go:241] waiting for startup goroutines ...
	I0510 19:28:06.266649  459268 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0510 19:28:06.266685  459268 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0510 19:28:06.302650  459268 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0510 19:28:06.334925  459268 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0510 19:28:06.334968  459268 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0510 19:28:06.361227  459268 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0510 19:28:06.415256  459268 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0510 19:28:06.415296  459268 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0510 19:28:06.419004  459268 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0510 19:28:06.419036  459268 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0510 19:28:06.550056  459268 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0510 19:28:06.550095  459268 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0510 19:28:06.551403  459268 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0510 19:28:06.551436  459268 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0510 19:28:06.652695  459268 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0510 19:28:06.652723  459268 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0510 19:28:06.732300  459268 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0510 19:28:06.732329  459268 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0510 19:28:06.812826  459268 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0510 19:28:06.812859  459268 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0510 19:28:06.814831  459268 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0510 19:28:06.941859  459268 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0510 19:28:06.941910  459268 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0510 19:28:07.112650  459268 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0510 19:28:07.112683  459268 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0510 19:28:07.230569  459268 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0510 19:28:07.230606  459268 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0510 19:28:07.348026  459268 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0510 19:28:08.311112  459268 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.008411221s)
	I0510 19:28:08.311190  459268 main.go:141] libmachine: Making call to close driver server
	I0510 19:28:08.311196  459268 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.949932076s)
	I0510 19:28:08.311207  459268 main.go:141] libmachine: (embed-certs-483140) Calling .Close
	I0510 19:28:08.311253  459268 main.go:141] libmachine: Making call to close driver server
	I0510 19:28:08.311374  459268 main.go:141] libmachine: (embed-certs-483140) Calling .Close
	I0510 19:28:08.311588  459268 main.go:141] libmachine: Successfully made call to close driver server
	I0510 19:28:08.311605  459268 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 19:28:08.311650  459268 main.go:141] libmachine: (embed-certs-483140) DBG | Closing plugin on server side
	I0510 19:28:08.311673  459268 main.go:141] libmachine: Successfully made call to close driver server
	I0510 19:28:08.311684  459268 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 19:28:08.311686  459268 main.go:141] libmachine: (embed-certs-483140) DBG | Closing plugin on server side
	I0510 19:28:08.311693  459268 main.go:141] libmachine: Making call to close driver server
	I0510 19:28:08.311701  459268 main.go:141] libmachine: (embed-certs-483140) Calling .Close
	I0510 19:28:08.311749  459268 main.go:141] libmachine: Making call to close driver server
	I0510 19:28:08.311769  459268 main.go:141] libmachine: (embed-certs-483140) Calling .Close
	I0510 19:28:08.311934  459268 main.go:141] libmachine: Successfully made call to close driver server
	I0510 19:28:08.311961  459268 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 19:28:08.313225  459268 main.go:141] libmachine: (embed-certs-483140) DBG | Closing plugin on server side
	I0510 19:28:08.313491  459268 main.go:141] libmachine: Successfully made call to close driver server
	I0510 19:28:08.313506  459268 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 19:28:08.331318  459268 main.go:141] libmachine: Making call to close driver server
	I0510 19:28:08.331353  459268 main.go:141] libmachine: (embed-certs-483140) Calling .Close
	I0510 19:28:08.331610  459268 main.go:141] libmachine: (embed-certs-483140) DBG | Closing plugin on server side
	I0510 19:28:08.331656  459268 main.go:141] libmachine: Successfully made call to close driver server
	I0510 19:28:08.331664  459268 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 19:28:08.561201  459268 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.746324825s)
	I0510 19:28:08.561271  459268 main.go:141] libmachine: Making call to close driver server
	I0510 19:28:08.561285  459268 main.go:141] libmachine: (embed-certs-483140) Calling .Close
	I0510 19:28:08.561649  459268 main.go:141] libmachine: Successfully made call to close driver server
	I0510 19:28:08.561672  459268 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 19:28:08.561690  459268 main.go:141] libmachine: Making call to close driver server
	I0510 19:28:08.561698  459268 main.go:141] libmachine: (embed-certs-483140) Calling .Close
	I0510 19:28:08.562030  459268 main.go:141] libmachine: (embed-certs-483140) DBG | Closing plugin on server side
	I0510 19:28:08.562077  459268 main.go:141] libmachine: Successfully made call to close driver server
	I0510 19:28:08.562088  459268 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 19:28:08.562103  459268 addons.go:479] Verifying addon metrics-server=true in "embed-certs-483140"
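
Each addon above is staged by copying its manifests into /etc/kubernetes/addons/ and then applying them with the node-local kubectl binary and kubeconfig. A hedged Go sketch of that apply step follows; the binary and file paths are taken from the log, but this is not minikube's addons.go implementation.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyManifests shells out to kubectl to apply a set of addon manifests,
// mirroring the "kubectl apply -f ... -f ..." invocations in the log above.
func applyManifests(kubectl, kubeconfig string, manifests []string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	if err := applyManifests("/var/lib/minikube/binaries/v1.33.0/kubectl",
		"/var/lib/minikube/kubeconfig", manifests); err != nil {
		fmt.Println(err)
	}
}
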
	I0510 19:28:04.066752  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:04.567029  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:05.066242  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:05.567101  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:06.066378  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:06.566985  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:07.066671  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:07.566514  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:08.067086  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:08.566885  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:09.320104  459268 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.972016021s)
	I0510 19:28:09.320180  459268 main.go:141] libmachine: Making call to close driver server
	I0510 19:28:09.320206  459268 main.go:141] libmachine: (embed-certs-483140) Calling .Close
	I0510 19:28:09.320585  459268 main.go:141] libmachine: (embed-certs-483140) DBG | Closing plugin on server side
	I0510 19:28:09.320633  459268 main.go:141] libmachine: Successfully made call to close driver server
	I0510 19:28:09.320643  459268 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 19:28:09.320652  459268 main.go:141] libmachine: Making call to close driver server
	I0510 19:28:09.320660  459268 main.go:141] libmachine: (embed-certs-483140) Calling .Close
	I0510 19:28:09.320941  459268 main.go:141] libmachine: (embed-certs-483140) DBG | Closing plugin on server side
	I0510 19:28:09.320962  459268 main.go:141] libmachine: Successfully made call to close driver server
	I0510 19:28:09.320975  459268 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 19:28:09.323341  459268 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-483140 addons enable metrics-server
	
	I0510 19:28:09.324636  459268 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0510 19:28:09.325664  459268 addons.go:514] duration metric: took 3.511519103s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0510 19:28:09.325722  459268 start.go:246] waiting for cluster config update ...
	I0510 19:28:09.325741  459268 start.go:255] writing updated cluster config ...
	I0510 19:28:09.326092  459268 ssh_runner.go:195] Run: rm -f paused
	I0510 19:28:09.344642  459268 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0510 19:28:09.354144  459268 pod_ready.go:83] waiting for pod "coredns-674b8bbfcf-4ld9c" in "kube-system" namespace to be "Ready" or be gone ...
	W0510 19:28:11.360637  459268 pod_ready.go:104] pod "coredns-674b8bbfcf-4ld9c" is not "Ready", error: <nil>
	W0510 19:28:13.860282  459268 pod_ready.go:104] pod "coredns-674b8bbfcf-4ld9c" is not "Ready", error: <nil>
	I0510 19:28:09.066763  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:09.566992  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:10.066908  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:10.566843  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:11.066514  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:11.566388  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:12.066218  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:12.566934  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:13.066645  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:13.567085  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0510 19:28:15.860630  459268 pod_ready.go:104] pod "coredns-674b8bbfcf-4ld9c" is not "Ready", error: <nil>
	I0510 19:28:17.393207  459268 pod_ready.go:94] pod "coredns-674b8bbfcf-4ld9c" is "Ready"
	I0510 19:28:17.393237  459268 pod_ready.go:86] duration metric: took 8.039060776s for pod "coredns-674b8bbfcf-4ld9c" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:28:17.418993  459268 pod_ready.go:83] waiting for pod "etcd-embed-certs-483140" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:28:17.429049  459268 pod_ready.go:94] pod "etcd-embed-certs-483140" is "Ready"
	I0510 19:28:17.429081  459268 pod_ready.go:86] duration metric: took 10.055799ms for pod "etcd-embed-certs-483140" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:28:17.432083  459268 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-483140" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:28:17.437554  459268 pod_ready.go:94] pod "kube-apiserver-embed-certs-483140" is "Ready"
	I0510 19:28:17.437591  459268 pod_ready.go:86] duration metric: took 5.476778ms for pod "kube-apiserver-embed-certs-483140" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:28:17.440334  459268 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-483140" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:28:17.557594  459268 pod_ready.go:94] pod "kube-controller-manager-embed-certs-483140" is "Ready"
	I0510 19:28:17.557622  459268 pod_ready.go:86] duration metric: took 117.264734ms for pod "kube-controller-manager-embed-certs-483140" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:28:17.769743  459268 pod_ready.go:83] waiting for pod "kube-proxy-b2gvg" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:28:18.158013  459268 pod_ready.go:94] pod "kube-proxy-b2gvg" is "Ready"
	I0510 19:28:18.158042  459268 pod_ready.go:86] duration metric: took 388.270745ms for pod "kube-proxy-b2gvg" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:28:18.379133  459268 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-483140" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:28:18.758017  459268 pod_ready.go:94] pod "kube-scheduler-embed-certs-483140" is "Ready"
	I0510 19:28:18.758052  459268 pod_ready.go:86] duration metric: took 378.881401ms for pod "kube-scheduler-embed-certs-483140" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:28:18.758067  459268 pod_ready.go:40] duration metric: took 9.413376926s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0510 19:28:18.804476  459268 start.go:607] kubectl: 1.33.0, cluster: 1.33.0 (minor skew: 0)
	I0510 19:28:18.807325  459268 out.go:177] * Done! kubectl is now configured to use "embed-certs-483140" cluster and "default" namespace by default
	I0510 19:28:14.066994  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:14.567064  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:15.066411  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:15.567220  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:16.067320  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:16.566859  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:17.066625  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:17.566521  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:18.066671  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:18.566592  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:19.066253  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:19.566860  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:20.066367  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:20.567118  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:21.067193  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:21.566937  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:22.066333  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:22.567056  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:23.066988  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:23.566331  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:24.066265  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:24.566513  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:25.067048  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:25.567212  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:26.067158  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:26.566324  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:27.066325  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:27.566435  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:28.067014  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:28.566560  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:29.066490  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:29.567080  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:30.067132  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:30.566495  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:31.066973  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:31.566321  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:32.067212  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:32.566665  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:33.066716  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:33.566326  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:34.067017  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:34.566429  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:35.067039  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:35.566936  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:36.066553  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:36.566402  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:37.066800  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:37.566267  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:38.066188  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:38.567060  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:28:38.567180  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:28:38.614003  459056 cri.go:89] found id: ""
	I0510 19:28:38.614094  459056 logs.go:282] 0 containers: []
	W0510 19:28:38.614120  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:28:38.614132  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:28:38.614211  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:28:38.651000  459056 cri.go:89] found id: ""
	I0510 19:28:38.651034  459056 logs.go:282] 0 containers: []
	W0510 19:28:38.651046  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:28:38.651053  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:28:38.651121  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:28:38.688211  459056 cri.go:89] found id: ""
	I0510 19:28:38.688238  459056 logs.go:282] 0 containers: []
	W0510 19:28:38.688246  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:28:38.688252  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:28:38.688318  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:28:38.726904  459056 cri.go:89] found id: ""
	I0510 19:28:38.726933  459056 logs.go:282] 0 containers: []
	W0510 19:28:38.726953  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:28:38.726963  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:28:38.727020  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:28:38.764293  459056 cri.go:89] found id: ""
	I0510 19:28:38.764321  459056 logs.go:282] 0 containers: []
	W0510 19:28:38.764330  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:28:38.764335  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:28:38.764390  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:28:38.802044  459056 cri.go:89] found id: ""
	I0510 19:28:38.802075  459056 logs.go:282] 0 containers: []
	W0510 19:28:38.802083  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:28:38.802104  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:28:38.802160  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:28:38.840951  459056 cri.go:89] found id: ""
	I0510 19:28:38.840991  459056 logs.go:282] 0 containers: []
	W0510 19:28:38.841002  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:28:38.841010  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:28:38.841098  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:28:38.879478  459056 cri.go:89] found id: ""
	I0510 19:28:38.879514  459056 logs.go:282] 0 containers: []
	W0510 19:28:38.879522  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:28:38.879533  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:28:38.879548  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:28:38.932148  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:28:38.932193  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:28:38.947813  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:28:38.947845  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:28:39.094230  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:28:39.094264  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:28:39.094283  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:28:39.170356  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:28:39.170406  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:28:41.716545  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:41.734713  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:28:41.734791  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:28:41.772135  459056 cri.go:89] found id: ""
	I0510 19:28:41.772178  459056 logs.go:282] 0 containers: []
	W0510 19:28:41.772187  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:28:41.772193  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:28:41.772246  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:28:41.810841  459056 cri.go:89] found id: ""
	I0510 19:28:41.810875  459056 logs.go:282] 0 containers: []
	W0510 19:28:41.810886  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:28:41.810893  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:28:41.810969  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:28:41.848600  459056 cri.go:89] found id: ""
	I0510 19:28:41.848627  459056 logs.go:282] 0 containers: []
	W0510 19:28:41.848636  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:28:41.848643  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:28:41.848735  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:28:41.887214  459056 cri.go:89] found id: ""
	I0510 19:28:41.887261  459056 logs.go:282] 0 containers: []
	W0510 19:28:41.887273  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:28:41.887282  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:28:41.887353  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:28:41.926422  459056 cri.go:89] found id: ""
	I0510 19:28:41.926455  459056 logs.go:282] 0 containers: []
	W0510 19:28:41.926466  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:28:41.926474  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:28:41.926573  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:28:41.963547  459056 cri.go:89] found id: ""
	I0510 19:28:41.963582  459056 logs.go:282] 0 containers: []
	W0510 19:28:41.963595  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:28:41.963625  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:28:41.963699  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:28:42.007903  459056 cri.go:89] found id: ""
	I0510 19:28:42.007930  459056 logs.go:282] 0 containers: []
	W0510 19:28:42.007938  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:28:42.007943  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:28:42.007996  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:28:42.048020  459056 cri.go:89] found id: ""
	I0510 19:28:42.048054  459056 logs.go:282] 0 containers: []
	W0510 19:28:42.048062  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:28:42.048072  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:28:42.048085  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:28:42.099210  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:28:42.099267  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:28:42.114915  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:28:42.114947  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:28:42.196330  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:28:42.196364  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:28:42.196380  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:28:42.278729  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:28:42.278786  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:28:44.825880  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:44.844164  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:28:44.844258  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:28:44.883963  459056 cri.go:89] found id: ""
	I0510 19:28:44.883992  459056 logs.go:282] 0 containers: []
	W0510 19:28:44.884001  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:28:44.884008  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:28:44.884085  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:28:44.920183  459056 cri.go:89] found id: ""
	I0510 19:28:44.920214  459056 logs.go:282] 0 containers: []
	W0510 19:28:44.920222  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:28:44.920228  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:28:44.920304  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:28:44.956038  459056 cri.go:89] found id: ""
	I0510 19:28:44.956072  459056 logs.go:282] 0 containers: []
	W0510 19:28:44.956087  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:28:44.956093  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:28:44.956165  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:28:44.992412  459056 cri.go:89] found id: ""
	I0510 19:28:44.992448  459056 logs.go:282] 0 containers: []
	W0510 19:28:44.992460  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:28:44.992468  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:28:44.992540  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:28:45.029970  459056 cri.go:89] found id: ""
	I0510 19:28:45.030008  459056 logs.go:282] 0 containers: []
	W0510 19:28:45.030020  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:28:45.030027  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:28:45.030097  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:28:45.065606  459056 cri.go:89] found id: ""
	I0510 19:28:45.065643  459056 logs.go:282] 0 containers: []
	W0510 19:28:45.065654  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:28:45.065662  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:28:45.065736  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:28:45.102978  459056 cri.go:89] found id: ""
	I0510 19:28:45.103009  459056 logs.go:282] 0 containers: []
	W0510 19:28:45.103018  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:28:45.103024  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:28:45.103087  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:28:45.143725  459056 cri.go:89] found id: ""
	I0510 19:28:45.143752  459056 logs.go:282] 0 containers: []
	W0510 19:28:45.143761  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:28:45.143771  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:28:45.143783  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:28:45.187406  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:28:45.187443  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:28:45.237672  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:28:45.237725  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:28:45.253387  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:28:45.253425  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:28:45.326218  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:28:45.326246  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:28:45.326265  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:28:47.904696  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:47.922232  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:28:47.922326  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:28:47.964247  459056 cri.go:89] found id: ""
	I0510 19:28:47.964284  459056 logs.go:282] 0 containers: []
	W0510 19:28:47.964293  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:28:47.964299  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:28:47.964358  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:28:48.001130  459056 cri.go:89] found id: ""
	I0510 19:28:48.001159  459056 logs.go:282] 0 containers: []
	W0510 19:28:48.001167  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:28:48.001175  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:28:48.001245  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:28:48.038486  459056 cri.go:89] found id: ""
	I0510 19:28:48.038519  459056 logs.go:282] 0 containers: []
	W0510 19:28:48.038528  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:28:48.038534  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:28:48.038604  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:28:48.073594  459056 cri.go:89] found id: ""
	I0510 19:28:48.073628  459056 logs.go:282] 0 containers: []
	W0510 19:28:48.073636  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:28:48.073643  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:28:48.073716  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:28:48.113159  459056 cri.go:89] found id: ""
	I0510 19:28:48.113191  459056 logs.go:282] 0 containers: []
	W0510 19:28:48.113199  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:28:48.113205  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:28:48.113271  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:28:48.158534  459056 cri.go:89] found id: ""
	I0510 19:28:48.158570  459056 logs.go:282] 0 containers: []
	W0510 19:28:48.158581  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:28:48.158589  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:28:48.158661  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:28:48.194840  459056 cri.go:89] found id: ""
	I0510 19:28:48.194871  459056 logs.go:282] 0 containers: []
	W0510 19:28:48.194883  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:28:48.194889  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:28:48.194952  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:28:48.233411  459056 cri.go:89] found id: ""
	I0510 19:28:48.233446  459056 logs.go:282] 0 containers: []
	W0510 19:28:48.233455  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:28:48.233465  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:28:48.233481  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:28:48.248955  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:28:48.248988  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:28:48.321462  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:28:48.321486  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:28:48.321499  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:28:48.413091  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:28:48.413139  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:28:48.455370  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:28:48.455417  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:28:51.008549  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:51.026088  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:28:51.026175  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:28:51.065801  459056 cri.go:89] found id: ""
	I0510 19:28:51.065834  459056 logs.go:282] 0 containers: []
	W0510 19:28:51.065844  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:28:51.065850  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:28:51.065915  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:28:51.108971  459056 cri.go:89] found id: ""
	I0510 19:28:51.109002  459056 logs.go:282] 0 containers: []
	W0510 19:28:51.109010  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:28:51.109017  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:28:51.109081  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:28:51.153399  459056 cri.go:89] found id: ""
	I0510 19:28:51.153425  459056 logs.go:282] 0 containers: []
	W0510 19:28:51.153434  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:28:51.153440  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:28:51.153501  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:28:51.193120  459056 cri.go:89] found id: ""
	I0510 19:28:51.193150  459056 logs.go:282] 0 containers: []
	W0510 19:28:51.193159  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:28:51.193165  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:28:51.193219  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:28:51.232126  459056 cri.go:89] found id: ""
	I0510 19:28:51.232160  459056 logs.go:282] 0 containers: []
	W0510 19:28:51.232169  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:28:51.232176  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:28:51.232262  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:28:51.271265  459056 cri.go:89] found id: ""
	I0510 19:28:51.271292  459056 logs.go:282] 0 containers: []
	W0510 19:28:51.271300  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:28:51.271306  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:28:51.271380  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:28:51.314653  459056 cri.go:89] found id: ""
	I0510 19:28:51.314687  459056 logs.go:282] 0 containers: []
	W0510 19:28:51.314698  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:28:51.314710  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:28:51.314788  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:28:51.353697  459056 cri.go:89] found id: ""
	I0510 19:28:51.353726  459056 logs.go:282] 0 containers: []
	W0510 19:28:51.353734  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:28:51.353746  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:28:51.353762  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:28:51.406474  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:28:51.406515  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:28:51.423057  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:28:51.423092  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:28:51.501527  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:28:51.501551  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:28:51.501563  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:28:51.582228  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:28:51.582278  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:28:54.132967  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:54.161653  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:28:54.161729  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:28:54.201063  459056 cri.go:89] found id: ""
	I0510 19:28:54.201098  459056 logs.go:282] 0 containers: []
	W0510 19:28:54.201111  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:28:54.201120  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:28:54.201200  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:28:54.241268  459056 cri.go:89] found id: ""
	I0510 19:28:54.241298  459056 logs.go:282] 0 containers: []
	W0510 19:28:54.241307  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:28:54.241320  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:28:54.241388  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:28:54.279508  459056 cri.go:89] found id: ""
	I0510 19:28:54.279540  459056 logs.go:282] 0 containers: []
	W0510 19:28:54.279549  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:28:54.279555  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:28:54.279621  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:28:54.322256  459056 cri.go:89] found id: ""
	I0510 19:28:54.322295  459056 logs.go:282] 0 containers: []
	W0510 19:28:54.322306  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:28:54.322349  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:28:54.322423  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:28:54.360014  459056 cri.go:89] found id: ""
	I0510 19:28:54.360051  459056 logs.go:282] 0 containers: []
	W0510 19:28:54.360062  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:28:54.360071  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:28:54.360149  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:28:54.399429  459056 cri.go:89] found id: ""
	I0510 19:28:54.399462  459056 logs.go:282] 0 containers: []
	W0510 19:28:54.399473  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:28:54.399479  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:28:54.399544  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:28:54.437094  459056 cri.go:89] found id: ""
	I0510 19:28:54.437120  459056 logs.go:282] 0 containers: []
	W0510 19:28:54.437129  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:28:54.437135  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:28:54.437213  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:28:54.473964  459056 cri.go:89] found id: ""
	I0510 19:28:54.474000  459056 logs.go:282] 0 containers: []
	W0510 19:28:54.474012  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:28:54.474024  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:28:54.474037  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:28:54.526415  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:28:54.526458  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:28:54.542142  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:28:54.542177  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:28:54.618555  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:28:54.618582  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:28:54.618600  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:28:54.695979  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:28:54.696026  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:28:57.241583  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:57.259270  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:28:57.259347  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:28:57.297603  459056 cri.go:89] found id: ""
	I0510 19:28:57.297640  459056 logs.go:282] 0 containers: []
	W0510 19:28:57.297648  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:28:57.297664  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:28:57.297734  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:28:57.339031  459056 cri.go:89] found id: ""
	I0510 19:28:57.339063  459056 logs.go:282] 0 containers: []
	W0510 19:28:57.339072  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:28:57.339090  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:28:57.339167  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:28:57.375753  459056 cri.go:89] found id: ""
	I0510 19:28:57.375783  459056 logs.go:282] 0 containers: []
	W0510 19:28:57.375792  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:28:57.375799  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:28:57.375855  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:28:57.414729  459056 cri.go:89] found id: ""
	I0510 19:28:57.414758  459056 logs.go:282] 0 containers: []
	W0510 19:28:57.414770  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:28:57.414779  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:28:57.414854  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:28:57.453265  459056 cri.go:89] found id: ""
	I0510 19:28:57.453298  459056 logs.go:282] 0 containers: []
	W0510 19:28:57.453309  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:28:57.453318  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:28:57.453379  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:28:57.491548  459056 cri.go:89] found id: ""
	I0510 19:28:57.491579  459056 logs.go:282] 0 containers: []
	W0510 19:28:57.491587  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:28:57.491594  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:28:57.491670  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:28:57.529795  459056 cri.go:89] found id: ""
	I0510 19:28:57.529822  459056 logs.go:282] 0 containers: []
	W0510 19:28:57.529831  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:28:57.529837  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:28:57.529901  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:28:57.570146  459056 cri.go:89] found id: ""
	I0510 19:28:57.570177  459056 logs.go:282] 0 containers: []
	W0510 19:28:57.570186  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:28:57.570196  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:28:57.570211  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:28:57.622879  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:28:57.622928  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:28:57.639210  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:28:57.639256  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:28:57.717348  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:28:57.717382  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:28:57.717399  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:28:57.799663  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:28:57.799716  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:00.351909  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:00.369231  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:00.369300  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:00.419696  459056 cri.go:89] found id: ""
	I0510 19:29:00.419730  459056 logs.go:282] 0 containers: []
	W0510 19:29:00.419740  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:00.419747  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:00.419810  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:00.456741  459056 cri.go:89] found id: ""
	I0510 19:29:00.456847  459056 logs.go:282] 0 containers: []
	W0510 19:29:00.456865  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:00.456874  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:00.456956  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:00.495771  459056 cri.go:89] found id: ""
	I0510 19:29:00.495816  459056 logs.go:282] 0 containers: []
	W0510 19:29:00.495829  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:00.495839  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:00.495919  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:00.541754  459056 cri.go:89] found id: ""
	I0510 19:29:00.541791  459056 logs.go:282] 0 containers: []
	W0510 19:29:00.541803  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:00.541812  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:00.541892  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:00.584200  459056 cri.go:89] found id: ""
	I0510 19:29:00.584230  459056 logs.go:282] 0 containers: []
	W0510 19:29:00.584239  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:00.584245  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:00.584336  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:00.632920  459056 cri.go:89] found id: ""
	I0510 19:29:00.632949  459056 logs.go:282] 0 containers: []
	W0510 19:29:00.632960  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:00.632969  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:00.633033  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:00.684270  459056 cri.go:89] found id: ""
	I0510 19:29:00.684300  459056 logs.go:282] 0 containers: []
	W0510 19:29:00.684309  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:00.684315  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:00.684368  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:00.722259  459056 cri.go:89] found id: ""
	I0510 19:29:00.722292  459056 logs.go:282] 0 containers: []
	W0510 19:29:00.722301  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:00.722311  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:00.722328  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:00.737395  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:00.737431  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:00.816432  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:00.816465  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:00.816485  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:00.900576  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:00.900631  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:00.946239  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:00.946285  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:03.499135  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:03.516795  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:03.516874  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:03.561554  459056 cri.go:89] found id: ""
	I0510 19:29:03.561589  459056 logs.go:282] 0 containers: []
	W0510 19:29:03.561599  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:03.561607  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:03.561674  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:03.604183  459056 cri.go:89] found id: ""
	I0510 19:29:03.604213  459056 logs.go:282] 0 containers: []
	W0510 19:29:03.604221  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:03.604227  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:03.604297  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:03.641319  459056 cri.go:89] found id: ""
	I0510 19:29:03.641350  459056 logs.go:282] 0 containers: []
	W0510 19:29:03.641359  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:03.641366  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:03.641431  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:03.679306  459056 cri.go:89] found id: ""
	I0510 19:29:03.679345  459056 logs.go:282] 0 containers: []
	W0510 19:29:03.679356  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:03.679364  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:03.679444  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:03.720380  459056 cri.go:89] found id: ""
	I0510 19:29:03.720412  459056 logs.go:282] 0 containers: []
	W0510 19:29:03.720420  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:03.720426  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:03.720497  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:03.758115  459056 cri.go:89] found id: ""
	I0510 19:29:03.758183  459056 logs.go:282] 0 containers: []
	W0510 19:29:03.758193  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:03.758206  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:03.758283  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:03.797182  459056 cri.go:89] found id: ""
	I0510 19:29:03.797215  459056 logs.go:282] 0 containers: []
	W0510 19:29:03.797226  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:03.797235  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:03.797294  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:03.837236  459056 cri.go:89] found id: ""
	I0510 19:29:03.837266  459056 logs.go:282] 0 containers: []
	W0510 19:29:03.837274  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:03.837284  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:03.837302  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:03.886362  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:03.886412  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:03.902546  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:03.902581  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:03.980181  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:03.980206  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:03.980219  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:04.060587  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:04.060641  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:06.606310  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:06.633919  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:06.634001  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:06.672938  459056 cri.go:89] found id: ""
	I0510 19:29:06.672969  459056 logs.go:282] 0 containers: []
	W0510 19:29:06.672978  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:06.672986  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:06.673047  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:06.711567  459056 cri.go:89] found id: ""
	I0510 19:29:06.711603  459056 logs.go:282] 0 containers: []
	W0510 19:29:06.711615  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:06.711624  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:06.711710  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:06.752456  459056 cri.go:89] found id: ""
	I0510 19:29:06.752498  459056 logs.go:282] 0 containers: []
	W0510 19:29:06.752510  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:06.752520  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:06.752592  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:06.792722  459056 cri.go:89] found id: ""
	I0510 19:29:06.792755  459056 logs.go:282] 0 containers: []
	W0510 19:29:06.792764  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:06.792771  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:06.792832  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:06.833199  459056 cri.go:89] found id: ""
	I0510 19:29:06.833231  459056 logs.go:282] 0 containers: []
	W0510 19:29:06.833239  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:06.833246  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:06.833300  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:06.871347  459056 cri.go:89] found id: ""
	I0510 19:29:06.871378  459056 logs.go:282] 0 containers: []
	W0510 19:29:06.871386  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:06.871393  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:06.871448  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:06.909447  459056 cri.go:89] found id: ""
	I0510 19:29:06.909478  459056 logs.go:282] 0 containers: []
	W0510 19:29:06.909489  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:06.909497  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:06.909561  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:06.945795  459056 cri.go:89] found id: ""
	I0510 19:29:06.945829  459056 logs.go:282] 0 containers: []
	W0510 19:29:06.945837  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:06.945847  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:06.945861  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:07.028777  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:07.028825  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:07.070640  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:07.070673  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:07.124335  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:07.124383  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:07.140167  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:07.140197  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:07.218319  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:09.718885  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:09.737619  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:09.737701  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:09.775164  459056 cri.go:89] found id: ""
	I0510 19:29:09.775203  459056 logs.go:282] 0 containers: []
	W0510 19:29:09.775211  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:09.775218  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:09.775292  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:09.819357  459056 cri.go:89] found id: ""
	I0510 19:29:09.819395  459056 logs.go:282] 0 containers: []
	W0510 19:29:09.819406  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:09.819415  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:09.819490  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:09.858894  459056 cri.go:89] found id: ""
	I0510 19:29:09.858928  459056 logs.go:282] 0 containers: []
	W0510 19:29:09.858937  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:09.858942  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:09.858996  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:09.895496  459056 cri.go:89] found id: ""
	I0510 19:29:09.895543  459056 logs.go:282] 0 containers: []
	W0510 19:29:09.895554  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:09.895562  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:09.895629  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:09.935443  459056 cri.go:89] found id: ""
	I0510 19:29:09.935476  459056 logs.go:282] 0 containers: []
	W0510 19:29:09.935484  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:09.935490  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:09.935552  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:09.975013  459056 cri.go:89] found id: ""
	I0510 19:29:09.975050  459056 logs.go:282] 0 containers: []
	W0510 19:29:09.975059  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:09.975066  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:09.975122  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:10.017332  459056 cri.go:89] found id: ""
	I0510 19:29:10.017364  459056 logs.go:282] 0 containers: []
	W0510 19:29:10.017372  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:10.017378  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:10.017432  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:10.054109  459056 cri.go:89] found id: ""
	I0510 19:29:10.054145  459056 logs.go:282] 0 containers: []
	W0510 19:29:10.054157  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:10.054169  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:10.054187  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:10.107219  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:10.107275  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:10.122900  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:10.122946  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:10.197374  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:10.197402  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:10.197423  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:10.276176  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:10.276222  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:12.822189  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:12.839516  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:12.839586  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:12.876495  459056 cri.go:89] found id: ""
	I0510 19:29:12.876532  459056 logs.go:282] 0 containers: []
	W0510 19:29:12.876544  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:12.876553  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:12.876628  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:12.914537  459056 cri.go:89] found id: ""
	I0510 19:29:12.914571  459056 logs.go:282] 0 containers: []
	W0510 19:29:12.914581  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:12.914587  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:12.914662  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:12.953369  459056 cri.go:89] found id: ""
	I0510 19:29:12.953403  459056 logs.go:282] 0 containers: []
	W0510 19:29:12.953412  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:12.953418  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:12.953475  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:12.991117  459056 cri.go:89] found id: ""
	I0510 19:29:12.991150  459056 logs.go:282] 0 containers: []
	W0510 19:29:12.991159  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:12.991167  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:12.991226  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:13.035209  459056 cri.go:89] found id: ""
	I0510 19:29:13.035268  459056 logs.go:282] 0 containers: []
	W0510 19:29:13.035281  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:13.035290  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:13.035364  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:13.072156  459056 cri.go:89] found id: ""
	I0510 19:29:13.072191  459056 logs.go:282] 0 containers: []
	W0510 19:29:13.072203  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:13.072211  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:13.072279  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:13.108863  459056 cri.go:89] found id: ""
	I0510 19:29:13.108893  459056 logs.go:282] 0 containers: []
	W0510 19:29:13.108903  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:13.108910  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:13.108967  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:13.155406  459056 cri.go:89] found id: ""
	I0510 19:29:13.155437  459056 logs.go:282] 0 containers: []
	W0510 19:29:13.155445  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:13.155455  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:13.155467  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:13.208638  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:13.208694  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:13.225071  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:13.225107  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:13.300472  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:13.300498  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:13.300515  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:13.380669  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:13.380714  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:15.924108  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:15.941384  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:15.941465  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:15.984230  459056 cri.go:89] found id: ""
	I0510 19:29:15.984259  459056 logs.go:282] 0 containers: []
	W0510 19:29:15.984267  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:15.984273  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:15.984328  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:16.022696  459056 cri.go:89] found id: ""
	I0510 19:29:16.022725  459056 logs.go:282] 0 containers: []
	W0510 19:29:16.022733  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:16.022740  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:16.022818  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:16.064311  459056 cri.go:89] found id: ""
	I0510 19:29:16.064344  459056 logs.go:282] 0 containers: []
	W0510 19:29:16.064356  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:16.064364  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:16.064432  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:16.110646  459056 cri.go:89] found id: ""
	I0510 19:29:16.110680  459056 logs.go:282] 0 containers: []
	W0510 19:29:16.110688  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:16.110695  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:16.110779  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:16.155423  459056 cri.go:89] found id: ""
	I0510 19:29:16.155466  459056 logs.go:282] 0 containers: []
	W0510 19:29:16.155478  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:16.155485  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:16.155560  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:16.199404  459056 cri.go:89] found id: ""
	I0510 19:29:16.199437  459056 logs.go:282] 0 containers: []
	W0510 19:29:16.199445  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:16.199455  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:16.199518  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:16.244501  459056 cri.go:89] found id: ""
	I0510 19:29:16.244532  459056 logs.go:282] 0 containers: []
	W0510 19:29:16.244541  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:16.244547  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:16.244622  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:16.289564  459056 cri.go:89] found id: ""
	I0510 19:29:16.289594  459056 logs.go:282] 0 containers: []
	W0510 19:29:16.289609  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:16.289628  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:16.289645  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:16.339326  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:16.339360  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:16.392002  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:16.392050  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:16.408009  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:16.408039  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:16.480932  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:16.480959  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:16.480972  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:19.062321  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:19.079587  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:19.079667  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:19.122776  459056 cri.go:89] found id: ""
	I0510 19:29:19.122809  459056 logs.go:282] 0 containers: []
	W0510 19:29:19.122817  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:19.122823  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:19.122882  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:19.160116  459056 cri.go:89] found id: ""
	I0510 19:29:19.160154  459056 logs.go:282] 0 containers: []
	W0510 19:29:19.160166  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:19.160175  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:19.160258  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:19.198049  459056 cri.go:89] found id: ""
	I0510 19:29:19.198081  459056 logs.go:282] 0 containers: []
	W0510 19:29:19.198089  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:19.198095  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:19.198151  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:19.236547  459056 cri.go:89] found id: ""
	I0510 19:29:19.236578  459056 logs.go:282] 0 containers: []
	W0510 19:29:19.236587  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:19.236596  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:19.236682  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:19.274688  459056 cri.go:89] found id: ""
	I0510 19:29:19.274727  459056 logs.go:282] 0 containers: []
	W0510 19:29:19.274738  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:19.274746  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:19.274819  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:19.317813  459056 cri.go:89] found id: ""
	I0510 19:29:19.317843  459056 logs.go:282] 0 containers: []
	W0510 19:29:19.317853  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:19.317865  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:19.317934  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:19.360619  459056 cri.go:89] found id: ""
	I0510 19:29:19.360654  459056 logs.go:282] 0 containers: []
	W0510 19:29:19.360663  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:19.360669  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:19.360735  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:19.399001  459056 cri.go:89] found id: ""
	I0510 19:29:19.399030  459056 logs.go:282] 0 containers: []
	W0510 19:29:19.399038  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:19.399048  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:19.399061  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:19.482768  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:19.482819  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:19.525273  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:19.525316  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:19.579149  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:19.579197  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:19.594813  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:19.594853  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:19.667950  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:22.169701  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:22.187665  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:22.187746  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:22.227992  459056 cri.go:89] found id: ""
	I0510 19:29:22.228022  459056 logs.go:282] 0 containers: []
	W0510 19:29:22.228030  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:22.228041  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:22.228164  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:22.267106  459056 cri.go:89] found id: ""
	I0510 19:29:22.267140  459056 logs.go:282] 0 containers: []
	W0510 19:29:22.267149  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:22.267155  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:22.267211  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:22.305600  459056 cri.go:89] found id: ""
	I0510 19:29:22.305628  459056 logs.go:282] 0 containers: []
	W0510 19:29:22.305636  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:22.305643  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:22.305711  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:22.345673  459056 cri.go:89] found id: ""
	I0510 19:29:22.345708  459056 logs.go:282] 0 containers: []
	W0510 19:29:22.345719  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:22.345724  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:22.345778  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:22.384325  459056 cri.go:89] found id: ""
	I0510 19:29:22.384358  459056 logs.go:282] 0 containers: []
	W0510 19:29:22.384371  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:22.384387  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:22.384467  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:22.424747  459056 cri.go:89] found id: ""
	I0510 19:29:22.424779  459056 logs.go:282] 0 containers: []
	W0510 19:29:22.424787  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:22.424794  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:22.424848  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:22.470878  459056 cri.go:89] found id: ""
	I0510 19:29:22.470916  459056 logs.go:282] 0 containers: []
	W0510 19:29:22.470929  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:22.470937  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:22.471010  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:22.515651  459056 cri.go:89] found id: ""
	I0510 19:29:22.515682  459056 logs.go:282] 0 containers: []
	W0510 19:29:22.515693  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:22.515713  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:22.515730  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:22.573654  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:22.573699  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:22.590599  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:22.590637  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:22.670834  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:22.670866  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:22.670882  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:22.754958  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:22.755019  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:25.299898  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:25.317959  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:25.318047  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:25.358952  459056 cri.go:89] found id: ""
	I0510 19:29:25.358990  459056 logs.go:282] 0 containers: []
	W0510 19:29:25.358999  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:25.359005  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:25.359068  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:25.402269  459056 cri.go:89] found id: ""
	I0510 19:29:25.402300  459056 logs.go:282] 0 containers: []
	W0510 19:29:25.402308  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:25.402321  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:25.402402  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:25.441309  459056 cri.go:89] found id: ""
	I0510 19:29:25.441338  459056 logs.go:282] 0 containers: []
	W0510 19:29:25.441348  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:25.441357  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:25.441421  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:25.477026  459056 cri.go:89] found id: ""
	I0510 19:29:25.477073  459056 logs.go:282] 0 containers: []
	W0510 19:29:25.477087  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:25.477095  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:25.477168  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:25.514227  459056 cri.go:89] found id: ""
	I0510 19:29:25.514263  459056 logs.go:282] 0 containers: []
	W0510 19:29:25.514274  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:25.514283  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:25.514357  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:25.552961  459056 cri.go:89] found id: ""
	I0510 19:29:25.552993  459056 logs.go:282] 0 containers: []
	W0510 19:29:25.553002  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:25.553010  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:25.553075  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:25.591284  459056 cri.go:89] found id: ""
	I0510 19:29:25.591315  459056 logs.go:282] 0 containers: []
	W0510 19:29:25.591327  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:25.591336  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:25.591404  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:25.631688  459056 cri.go:89] found id: ""
	I0510 19:29:25.631720  459056 logs.go:282] 0 containers: []
	W0510 19:29:25.631728  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:25.631737  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:25.631750  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:25.686015  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:25.686057  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:25.702233  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:25.702271  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:25.777340  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:25.777373  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:25.777389  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:25.857072  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:25.857118  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:28.400902  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:28.418498  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:28.418570  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:28.454908  459056 cri.go:89] found id: ""
	I0510 19:29:28.454941  459056 logs.go:282] 0 containers: []
	W0510 19:29:28.454950  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:28.454956  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:28.455014  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:28.493646  459056 cri.go:89] found id: ""
	I0510 19:29:28.493682  459056 logs.go:282] 0 containers: []
	W0510 19:29:28.493691  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:28.493700  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:28.493766  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:28.531482  459056 cri.go:89] found id: ""
	I0510 19:29:28.531524  459056 logs.go:282] 0 containers: []
	W0510 19:29:28.531537  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:28.531546  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:28.531618  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:28.568042  459056 cri.go:89] found id: ""
	I0510 19:29:28.568078  459056 logs.go:282] 0 containers: []
	W0510 19:29:28.568087  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:28.568093  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:28.568150  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:28.607141  459056 cri.go:89] found id: ""
	I0510 19:29:28.607172  459056 logs.go:282] 0 containers: []
	W0510 19:29:28.607181  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:28.607187  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:28.607271  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:28.645485  459056 cri.go:89] found id: ""
	I0510 19:29:28.645519  459056 logs.go:282] 0 containers: []
	W0510 19:29:28.645532  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:28.645544  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:28.645618  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:28.685596  459056 cri.go:89] found id: ""
	I0510 19:29:28.685638  459056 logs.go:282] 0 containers: []
	W0510 19:29:28.685649  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:28.685657  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:28.685724  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:28.724977  459056 cri.go:89] found id: ""
	I0510 19:29:28.725005  459056 logs.go:282] 0 containers: []
	W0510 19:29:28.725013  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:28.725023  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:28.725101  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:28.777421  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:28.777476  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:28.793767  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:28.793806  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:28.865581  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:28.865611  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:28.865638  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:28.945845  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:28.945895  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:31.491500  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:31.508822  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:31.508896  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:31.546371  459056 cri.go:89] found id: ""
	I0510 19:29:31.546400  459056 logs.go:282] 0 containers: []
	W0510 19:29:31.546412  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:31.546420  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:31.546478  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:31.588214  459056 cri.go:89] found id: ""
	I0510 19:29:31.588244  459056 logs.go:282] 0 containers: []
	W0510 19:29:31.588252  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:31.588258  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:31.588313  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:31.626683  459056 cri.go:89] found id: ""
	I0510 19:29:31.626718  459056 logs.go:282] 0 containers: []
	W0510 19:29:31.626729  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:31.626737  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:31.626810  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:31.665979  459056 cri.go:89] found id: ""
	I0510 19:29:31.666013  459056 logs.go:282] 0 containers: []
	W0510 19:29:31.666023  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:31.666030  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:31.666087  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:31.702718  459056 cri.go:89] found id: ""
	I0510 19:29:31.702751  459056 logs.go:282] 0 containers: []
	W0510 19:29:31.702767  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:31.702775  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:31.702830  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:31.740496  459056 cri.go:89] found id: ""
	I0510 19:29:31.740530  459056 logs.go:282] 0 containers: []
	W0510 19:29:31.740553  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:31.740561  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:31.740616  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:31.782178  459056 cri.go:89] found id: ""
	I0510 19:29:31.782209  459056 logs.go:282] 0 containers: []
	W0510 19:29:31.782218  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:31.782224  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:31.782278  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:31.817466  459056 cri.go:89] found id: ""
	I0510 19:29:31.817495  459056 logs.go:282] 0 containers: []
	W0510 19:29:31.817503  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:31.817512  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:31.817527  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:31.832641  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:31.832675  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:31.913719  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:31.913745  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:31.913764  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:31.990267  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:31.990316  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:32.033353  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:32.033384  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:34.586504  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:34.606546  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:34.606628  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:34.644492  459056 cri.go:89] found id: ""
	I0510 19:29:34.644526  459056 logs.go:282] 0 containers: []
	W0510 19:29:34.644539  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:34.644547  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:34.644616  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:34.684520  459056 cri.go:89] found id: ""
	I0510 19:29:34.684550  459056 logs.go:282] 0 containers: []
	W0510 19:29:34.684566  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:34.684572  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:34.684627  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:34.722015  459056 cri.go:89] found id: ""
	I0510 19:29:34.722047  459056 logs.go:282] 0 containers: []
	W0510 19:29:34.722055  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:34.722062  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:34.722118  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:34.760175  459056 cri.go:89] found id: ""
	I0510 19:29:34.760203  459056 logs.go:282] 0 containers: []
	W0510 19:29:34.760212  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:34.760219  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:34.760291  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:34.797742  459056 cri.go:89] found id: ""
	I0510 19:29:34.797775  459056 logs.go:282] 0 containers: []
	W0510 19:29:34.797787  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:34.797796  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:34.797870  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:34.834792  459056 cri.go:89] found id: ""
	I0510 19:29:34.834824  459056 logs.go:282] 0 containers: []
	W0510 19:29:34.834832  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:34.834839  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:34.834905  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:34.881683  459056 cri.go:89] found id: ""
	I0510 19:29:34.881720  459056 logs.go:282] 0 containers: []
	W0510 19:29:34.881729  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:34.881738  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:34.881815  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:34.925574  459056 cri.go:89] found id: ""
	I0510 19:29:34.925605  459056 logs.go:282] 0 containers: []
	W0510 19:29:34.925613  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:34.925622  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:34.925636  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:34.977426  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:34.977477  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:34.993190  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:34.993226  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:35.071565  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:35.071590  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:35.071604  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:35.149510  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:35.149563  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:37.697052  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:37.714716  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:37.714828  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:37.752850  459056 cri.go:89] found id: ""
	I0510 19:29:37.752896  459056 logs.go:282] 0 containers: []
	W0510 19:29:37.752909  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:37.752916  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:37.752989  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:37.791810  459056 cri.go:89] found id: ""
	I0510 19:29:37.791847  459056 logs.go:282] 0 containers: []
	W0510 19:29:37.791860  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:37.791868  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:37.791929  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:37.831622  459056 cri.go:89] found id: ""
	I0510 19:29:37.831658  459056 logs.go:282] 0 containers: []
	W0510 19:29:37.831669  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:37.831677  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:37.831755  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:37.873390  459056 cri.go:89] found id: ""
	I0510 19:29:37.873419  459056 logs.go:282] 0 containers: []
	W0510 19:29:37.873427  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:37.873434  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:37.873493  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:37.915385  459056 cri.go:89] found id: ""
	I0510 19:29:37.915421  459056 logs.go:282] 0 containers: []
	W0510 19:29:37.915431  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:37.915439  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:37.915517  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:37.953620  459056 cri.go:89] found id: ""
	I0510 19:29:37.953654  459056 logs.go:282] 0 containers: []
	W0510 19:29:37.953666  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:37.953678  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:37.953772  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:37.991282  459056 cri.go:89] found id: ""
	I0510 19:29:37.991315  459056 logs.go:282] 0 containers: []
	W0510 19:29:37.991328  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:37.991338  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:37.991413  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:38.028482  459056 cri.go:89] found id: ""
	I0510 19:29:38.028520  459056 logs.go:282] 0 containers: []
	W0510 19:29:38.028531  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:38.028545  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:38.028563  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:38.083448  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:38.083506  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:38.099016  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:38.099067  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:38.174538  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:38.174572  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:38.174587  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:38.258394  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:38.258443  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:40.803473  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:40.821814  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:40.821912  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:40.860566  459056 cri.go:89] found id: ""
	I0510 19:29:40.860600  459056 logs.go:282] 0 containers: []
	W0510 19:29:40.860612  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:40.860622  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:40.860683  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:40.897132  459056 cri.go:89] found id: ""
	I0510 19:29:40.897161  459056 logs.go:282] 0 containers: []
	W0510 19:29:40.897169  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:40.897177  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:40.897239  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:40.944583  459056 cri.go:89] found id: ""
	I0510 19:29:40.944622  459056 logs.go:282] 0 containers: []
	W0510 19:29:40.944636  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:40.944645  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:40.944715  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:40.983132  459056 cri.go:89] found id: ""
	I0510 19:29:40.983165  459056 logs.go:282] 0 containers: []
	W0510 19:29:40.983176  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:40.983185  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:40.983283  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:41.020441  459056 cri.go:89] found id: ""
	I0510 19:29:41.020477  459056 logs.go:282] 0 containers: []
	W0510 19:29:41.020486  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:41.020494  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:41.020548  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:41.058522  459056 cri.go:89] found id: ""
	I0510 19:29:41.058562  459056 logs.go:282] 0 containers: []
	W0510 19:29:41.058572  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:41.058579  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:41.058635  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:41.098730  459056 cri.go:89] found id: ""
	I0510 19:29:41.098775  459056 logs.go:282] 0 containers: []
	W0510 19:29:41.098785  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:41.098792  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:41.098854  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:41.139270  459056 cri.go:89] found id: ""
	I0510 19:29:41.139302  459056 logs.go:282] 0 containers: []
	W0510 19:29:41.139310  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:41.139322  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:41.139335  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:41.215383  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:41.215434  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:41.258268  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:41.258314  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:41.313241  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:41.313287  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:41.332109  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:41.332148  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:41.433376  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:43.935156  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:43.953570  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:43.953694  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:43.994014  459056 cri.go:89] found id: ""
	I0510 19:29:43.994049  459056 logs.go:282] 0 containers: []
	W0510 19:29:43.994075  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:43.994083  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:43.994158  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:44.033884  459056 cri.go:89] found id: ""
	I0510 19:29:44.033922  459056 logs.go:282] 0 containers: []
	W0510 19:29:44.033932  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:44.033942  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:44.033999  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:44.075902  459056 cri.go:89] found id: ""
	I0510 19:29:44.075941  459056 logs.go:282] 0 containers: []
	W0510 19:29:44.075950  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:44.075956  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:44.076018  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:44.116711  459056 cri.go:89] found id: ""
	I0510 19:29:44.116745  459056 logs.go:282] 0 containers: []
	W0510 19:29:44.116757  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:44.116779  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:44.116853  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:44.157617  459056 cri.go:89] found id: ""
	I0510 19:29:44.157652  459056 logs.go:282] 0 containers: []
	W0510 19:29:44.157661  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:44.157668  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:44.157727  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:44.197634  459056 cri.go:89] found id: ""
	I0510 19:29:44.197671  459056 logs.go:282] 0 containers: []
	W0510 19:29:44.197679  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:44.197685  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:44.197743  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:44.235756  459056 cri.go:89] found id: ""
	I0510 19:29:44.235797  459056 logs.go:282] 0 containers: []
	W0510 19:29:44.235810  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:44.235818  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:44.235879  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:44.274251  459056 cri.go:89] found id: ""
	I0510 19:29:44.274292  459056 logs.go:282] 0 containers: []
	W0510 19:29:44.274305  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:44.274317  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:44.274337  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:44.318629  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:44.318669  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:44.370941  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:44.370987  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:44.386660  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:44.386697  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:44.463056  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:44.463085  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:44.463103  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:47.046858  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:47.068619  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:47.068705  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:47.119108  459056 cri.go:89] found id: ""
	I0510 19:29:47.119138  459056 logs.go:282] 0 containers: []
	W0510 19:29:47.119148  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:47.119154  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:47.119210  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:47.160941  459056 cri.go:89] found id: ""
	I0510 19:29:47.160974  459056 logs.go:282] 0 containers: []
	W0510 19:29:47.160982  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:47.160988  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:47.161050  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:47.210420  459056 cri.go:89] found id: ""
	I0510 19:29:47.210452  459056 logs.go:282] 0 containers: []
	W0510 19:29:47.210460  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:47.210466  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:47.210520  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:47.250554  459056 cri.go:89] found id: ""
	I0510 19:29:47.250591  459056 logs.go:282] 0 containers: []
	W0510 19:29:47.250600  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:47.250612  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:47.250674  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:47.290621  459056 cri.go:89] found id: ""
	I0510 19:29:47.290656  459056 logs.go:282] 0 containers: []
	W0510 19:29:47.290667  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:47.290676  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:47.290749  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:47.331044  459056 cri.go:89] found id: ""
	I0510 19:29:47.331079  459056 logs.go:282] 0 containers: []
	W0510 19:29:47.331091  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:47.331100  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:47.331162  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:47.369926  459056 cri.go:89] found id: ""
	I0510 19:29:47.369958  459056 logs.go:282] 0 containers: []
	W0510 19:29:47.369967  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:47.369973  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:47.370047  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:47.410658  459056 cri.go:89] found id: ""
	I0510 19:29:47.410699  459056 logs.go:282] 0 containers: []
	W0510 19:29:47.410708  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:47.410723  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:47.410737  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:47.489045  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:47.489100  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:47.536078  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:47.536117  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:47.588663  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:47.588727  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:47.606182  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:47.606220  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:47.680331  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:50.180849  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:50.198636  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:50.198740  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:50.238270  459056 cri.go:89] found id: ""
	I0510 19:29:50.238301  459056 logs.go:282] 0 containers: []
	W0510 19:29:50.238314  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:50.238323  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:50.238399  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:50.276207  459056 cri.go:89] found id: ""
	I0510 19:29:50.276244  459056 logs.go:282] 0 containers: []
	W0510 19:29:50.276256  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:50.276264  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:50.276333  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:50.311826  459056 cri.go:89] found id: ""
	I0510 19:29:50.311864  459056 logs.go:282] 0 containers: []
	W0510 19:29:50.311875  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:50.311884  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:50.311961  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:50.347100  459056 cri.go:89] found id: ""
	I0510 19:29:50.347133  459056 logs.go:282] 0 containers: []
	W0510 19:29:50.347142  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:50.347151  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:50.347229  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:50.382788  459056 cri.go:89] found id: ""
	I0510 19:29:50.382816  459056 logs.go:282] 0 containers: []
	W0510 19:29:50.382824  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:50.382830  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:50.382898  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:50.420656  459056 cri.go:89] found id: ""
	I0510 19:29:50.420700  459056 logs.go:282] 0 containers: []
	W0510 19:29:50.420709  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:50.420722  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:50.420782  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:50.460911  459056 cri.go:89] found id: ""
	I0510 19:29:50.460948  459056 logs.go:282] 0 containers: []
	W0510 19:29:50.460956  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:50.460962  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:50.461016  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:50.498074  459056 cri.go:89] found id: ""
	I0510 19:29:50.498109  459056 logs.go:282] 0 containers: []
	W0510 19:29:50.498122  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:50.498135  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:50.498152  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:50.576436  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:50.576486  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:50.620554  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:50.620594  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:50.672242  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:50.672292  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:50.688401  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:50.688435  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:50.765125  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:53.266941  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:53.285235  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:53.285306  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:53.327821  459056 cri.go:89] found id: ""
	I0510 19:29:53.327872  459056 logs.go:282] 0 containers: []
	W0510 19:29:53.327880  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:53.327888  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:53.327971  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:53.367170  459056 cri.go:89] found id: ""
	I0510 19:29:53.367212  459056 logs.go:282] 0 containers: []
	W0510 19:29:53.367224  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:53.367254  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:53.367338  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:53.411071  459056 cri.go:89] found id: ""
	I0510 19:29:53.411104  459056 logs.go:282] 0 containers: []
	W0510 19:29:53.411112  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:53.411119  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:53.411194  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:53.451093  459056 cri.go:89] found id: ""
	I0510 19:29:53.451160  459056 logs.go:282] 0 containers: []
	W0510 19:29:53.451175  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:53.451184  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:53.451278  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:53.490305  459056 cri.go:89] found id: ""
	I0510 19:29:53.490337  459056 logs.go:282] 0 containers: []
	W0510 19:29:53.490345  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:53.490351  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:53.490421  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:53.529657  459056 cri.go:89] found id: ""
	I0510 19:29:53.529703  459056 logs.go:282] 0 containers: []
	W0510 19:29:53.529716  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:53.529728  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:53.529801  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:53.570169  459056 cri.go:89] found id: ""
	I0510 19:29:53.570211  459056 logs.go:282] 0 containers: []
	W0510 19:29:53.570223  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:53.570232  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:53.570300  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:53.613547  459056 cri.go:89] found id: ""
	I0510 19:29:53.613576  459056 logs.go:282] 0 containers: []
	W0510 19:29:53.613584  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:53.613593  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:53.613607  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:53.665574  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:53.665633  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:53.682279  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:53.682319  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:53.760795  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:53.760824  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:53.760843  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:53.844386  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:53.844433  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:56.398332  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:56.416456  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:56.416552  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:56.454203  459056 cri.go:89] found id: ""
	I0510 19:29:56.454240  459056 logs.go:282] 0 containers: []
	W0510 19:29:56.454254  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:56.454265  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:56.454350  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:56.492744  459056 cri.go:89] found id: ""
	I0510 19:29:56.492779  459056 logs.go:282] 0 containers: []
	W0510 19:29:56.492791  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:56.492799  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:56.492893  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:56.529891  459056 cri.go:89] found id: ""
	I0510 19:29:56.529924  459056 logs.go:282] 0 containers: []
	W0510 19:29:56.529933  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:56.529943  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:56.530000  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:56.566697  459056 cri.go:89] found id: ""
	I0510 19:29:56.566732  459056 logs.go:282] 0 containers: []
	W0510 19:29:56.566743  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:56.566752  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:56.566816  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:56.608258  459056 cri.go:89] found id: ""
	I0510 19:29:56.608295  459056 logs.go:282] 0 containers: []
	W0510 19:29:56.608307  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:56.608315  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:56.608384  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:56.648700  459056 cri.go:89] found id: ""
	I0510 19:29:56.648734  459056 logs.go:282] 0 containers: []
	W0510 19:29:56.648746  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:56.648755  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:56.648823  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:56.686623  459056 cri.go:89] found id: ""
	I0510 19:29:56.686661  459056 logs.go:282] 0 containers: []
	W0510 19:29:56.686672  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:56.686680  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:56.686750  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:56.726136  459056 cri.go:89] found id: ""
	I0510 19:29:56.726165  459056 logs.go:282] 0 containers: []
	W0510 19:29:56.726180  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:56.726193  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:56.726209  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:56.777146  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:56.777195  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:56.793496  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:56.793530  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:56.866401  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:56.866436  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:56.866452  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:56.944116  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:56.944168  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:59.488989  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:59.506161  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:59.506233  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:59.542854  459056 cri.go:89] found id: ""
	I0510 19:29:59.542891  459056 logs.go:282] 0 containers: []
	W0510 19:29:59.542900  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:59.542907  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:59.542961  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:59.580216  459056 cri.go:89] found id: ""
	I0510 19:29:59.580257  459056 logs.go:282] 0 containers: []
	W0510 19:29:59.580268  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:59.580276  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:59.580348  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:59.623729  459056 cri.go:89] found id: ""
	I0510 19:29:59.623770  459056 logs.go:282] 0 containers: []
	W0510 19:29:59.623781  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:59.623790  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:59.623854  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:59.662414  459056 cri.go:89] found id: ""
	I0510 19:29:59.662447  459056 logs.go:282] 0 containers: []
	W0510 19:29:59.662455  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:59.662462  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:59.662531  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:59.700471  459056 cri.go:89] found id: ""
	I0510 19:29:59.700505  459056 logs.go:282] 0 containers: []
	W0510 19:29:59.700514  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:59.700520  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:59.700593  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:59.740841  459056 cri.go:89] found id: ""
	I0510 19:29:59.740876  459056 logs.go:282] 0 containers: []
	W0510 19:29:59.740884  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:59.740891  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:59.740944  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:59.782895  459056 cri.go:89] found id: ""
	I0510 19:29:59.782937  459056 logs.go:282] 0 containers: []
	W0510 19:29:59.782946  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:59.782952  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:59.783021  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:59.820556  459056 cri.go:89] found id: ""
	I0510 19:29:59.820591  459056 logs.go:282] 0 containers: []
	W0510 19:29:59.820603  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:59.820615  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:59.820632  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:59.835555  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:59.835591  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:59.907710  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:59.907742  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:59.907758  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:59.983847  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:59.983895  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:00.030738  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:00.030782  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:02.583146  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:02.601217  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:02.601290  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:02.638485  459056 cri.go:89] found id: ""
	I0510 19:30:02.638523  459056 logs.go:282] 0 containers: []
	W0510 19:30:02.638536  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:02.638544  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:02.638625  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:02.676096  459056 cri.go:89] found id: ""
	I0510 19:30:02.676124  459056 logs.go:282] 0 containers: []
	W0510 19:30:02.676132  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:02.676138  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:02.676198  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:02.712753  459056 cri.go:89] found id: ""
	I0510 19:30:02.712794  459056 logs.go:282] 0 containers: []
	W0510 19:30:02.712806  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:02.712814  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:02.712889  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:02.750540  459056 cri.go:89] found id: ""
	I0510 19:30:02.750572  459056 logs.go:282] 0 containers: []
	W0510 19:30:02.750580  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:02.750588  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:02.750666  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:02.789337  459056 cri.go:89] found id: ""
	I0510 19:30:02.789372  459056 logs.go:282] 0 containers: []
	W0510 19:30:02.789386  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:02.789394  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:02.789471  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:02.827044  459056 cri.go:89] found id: ""
	I0510 19:30:02.827076  459056 logs.go:282] 0 containers: []
	W0510 19:30:02.827087  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:02.827094  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:02.827154  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:02.867202  459056 cri.go:89] found id: ""
	I0510 19:30:02.867251  459056 logs.go:282] 0 containers: []
	W0510 19:30:02.867264  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:02.867272  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:02.867336  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:02.906104  459056 cri.go:89] found id: ""
	I0510 19:30:02.906136  459056 logs.go:282] 0 containers: []
	W0510 19:30:02.906145  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:02.906155  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:02.906167  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:02.959451  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:02.959504  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:02.975037  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:02.975074  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:03.051037  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:03.051066  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:03.051083  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:03.132615  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:03.132663  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:05.677564  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:05.695683  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:05.695774  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:05.733222  459056 cri.go:89] found id: ""
	I0510 19:30:05.733253  459056 logs.go:282] 0 containers: []
	W0510 19:30:05.733266  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:05.733273  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:05.733343  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:05.775893  459056 cri.go:89] found id: ""
	I0510 19:30:05.775926  459056 logs.go:282] 0 containers: []
	W0510 19:30:05.775938  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:05.775946  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:05.776013  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:05.814170  459056 cri.go:89] found id: ""
	I0510 19:30:05.814201  459056 logs.go:282] 0 containers: []
	W0510 19:30:05.814209  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:05.814215  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:05.814271  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:05.865156  459056 cri.go:89] found id: ""
	I0510 19:30:05.865185  459056 logs.go:282] 0 containers: []
	W0510 19:30:05.865193  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:05.865200  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:05.865267  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:05.904409  459056 cri.go:89] found id: ""
	I0510 19:30:05.904440  459056 logs.go:282] 0 containers: []
	W0510 19:30:05.904449  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:05.904455  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:05.904516  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:05.948278  459056 cri.go:89] found id: ""
	I0510 19:30:05.948308  459056 logs.go:282] 0 containers: []
	W0510 19:30:05.948316  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:05.948322  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:05.948383  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:05.986379  459056 cri.go:89] found id: ""
	I0510 19:30:05.986415  459056 logs.go:282] 0 containers: []
	W0510 19:30:05.986426  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:05.986435  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:05.986502  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:06.030940  459056 cri.go:89] found id: ""
	I0510 19:30:06.030974  459056 logs.go:282] 0 containers: []
	W0510 19:30:06.030984  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:06.030994  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:06.031007  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:06.081923  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:06.081973  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:06.097288  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:06.097321  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:06.169428  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:06.169457  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:06.169471  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:06.247404  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:06.247457  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:08.791138  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:08.810447  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:08.810527  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:08.849947  459056 cri.go:89] found id: ""
	I0510 19:30:08.849983  459056 logs.go:282] 0 containers: []
	W0510 19:30:08.849996  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:08.850005  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:08.850079  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:08.889474  459056 cri.go:89] found id: ""
	I0510 19:30:08.889511  459056 logs.go:282] 0 containers: []
	W0510 19:30:08.889521  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:08.889530  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:08.889605  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:08.929364  459056 cri.go:89] found id: ""
	I0510 19:30:08.929402  459056 logs.go:282] 0 containers: []
	W0510 19:30:08.929414  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:08.929420  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:08.929481  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:08.970260  459056 cri.go:89] found id: ""
	I0510 19:30:08.970292  459056 logs.go:282] 0 containers: []
	W0510 19:30:08.970301  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:08.970312  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:08.970370  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:09.011080  459056 cri.go:89] found id: ""
	I0510 19:30:09.011114  459056 logs.go:282] 0 containers: []
	W0510 19:30:09.011123  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:09.011130  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:09.011192  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:09.050057  459056 cri.go:89] found id: ""
	I0510 19:30:09.050096  459056 logs.go:282] 0 containers: []
	W0510 19:30:09.050106  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:09.050112  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:09.050177  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:09.089408  459056 cri.go:89] found id: ""
	I0510 19:30:09.089454  459056 logs.go:282] 0 containers: []
	W0510 19:30:09.089467  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:09.089484  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:09.089559  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:09.127502  459056 cri.go:89] found id: ""
	I0510 19:30:09.127533  459056 logs.go:282] 0 containers: []
	W0510 19:30:09.127544  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:09.127555  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:09.127573  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:09.177856  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:09.177903  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:09.194009  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:09.194041  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:09.269803  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:09.269833  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:09.269851  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:09.350498  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:09.350562  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:11.895252  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:11.913748  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:11.913819  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:11.957943  459056 cri.go:89] found id: ""
	I0510 19:30:11.957974  459056 logs.go:282] 0 containers: []
	W0510 19:30:11.957982  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:11.957990  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:11.958059  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:11.999707  459056 cri.go:89] found id: ""
	I0510 19:30:11.999735  459056 logs.go:282] 0 containers: []
	W0510 19:30:11.999743  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:11.999750  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:11.999805  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:12.044866  459056 cri.go:89] found id: ""
	I0510 19:30:12.044905  459056 logs.go:282] 0 containers: []
	W0510 19:30:12.044914  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:12.044922  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:12.044980  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:12.083885  459056 cri.go:89] found id: ""
	I0510 19:30:12.083925  459056 logs.go:282] 0 containers: []
	W0510 19:30:12.083938  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:12.083946  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:12.084014  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:12.124186  459056 cri.go:89] found id: ""
	I0510 19:30:12.124223  459056 logs.go:282] 0 containers: []
	W0510 19:30:12.124232  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:12.124239  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:12.124296  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:12.163773  459056 cri.go:89] found id: ""
	I0510 19:30:12.163809  459056 logs.go:282] 0 containers: []
	W0510 19:30:12.163817  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:12.163824  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:12.163887  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:12.208245  459056 cri.go:89] found id: ""
	I0510 19:30:12.208285  459056 logs.go:282] 0 containers: []
	W0510 19:30:12.208297  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:12.208305  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:12.208378  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:12.248816  459056 cri.go:89] found id: ""
	I0510 19:30:12.248855  459056 logs.go:282] 0 containers: []
	W0510 19:30:12.248871  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:12.248885  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:12.248907  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:12.293098  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:12.293137  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:12.346119  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:12.346166  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:12.362174  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:12.362208  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:12.436485  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:12.436514  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:12.436527  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:15.021483  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:15.039908  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:15.039983  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:15.077291  459056 cri.go:89] found id: ""
	I0510 19:30:15.077323  459056 logs.go:282] 0 containers: []
	W0510 19:30:15.077335  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:15.077344  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:15.077417  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:15.119066  459056 cri.go:89] found id: ""
	I0510 19:30:15.119099  459056 logs.go:282] 0 containers: []
	W0510 19:30:15.119108  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:15.119114  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:15.119169  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:15.158927  459056 cri.go:89] found id: ""
	I0510 19:30:15.158957  459056 logs.go:282] 0 containers: []
	W0510 19:30:15.158968  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:15.158976  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:15.159052  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:15.199423  459056 cri.go:89] found id: ""
	I0510 19:30:15.199458  459056 logs.go:282] 0 containers: []
	W0510 19:30:15.199467  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:15.199474  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:15.199538  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:15.237695  459056 cri.go:89] found id: ""
	I0510 19:30:15.237734  459056 logs.go:282] 0 containers: []
	W0510 19:30:15.237744  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:15.237751  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:15.237822  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:15.280652  459056 cri.go:89] found id: ""
	I0510 19:30:15.280693  459056 logs.go:282] 0 containers: []
	W0510 19:30:15.280705  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:15.280721  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:15.280794  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:15.319730  459056 cri.go:89] found id: ""
	I0510 19:30:15.319767  459056 logs.go:282] 0 containers: []
	W0510 19:30:15.319780  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:15.319788  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:15.319861  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:15.361113  459056 cri.go:89] found id: ""
	I0510 19:30:15.361147  459056 logs.go:282] 0 containers: []
	W0510 19:30:15.361156  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:15.361165  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:15.361178  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:15.424953  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:15.425003  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:15.444155  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:15.444187  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:15.520040  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:15.520067  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:15.520080  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:15.595963  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:15.596013  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:18.142672  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:18.160293  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:18.160373  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:18.197867  459056 cri.go:89] found id: ""
	I0510 19:30:18.197911  459056 logs.go:282] 0 containers: []
	W0510 19:30:18.197920  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:18.197927  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:18.197985  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:18.236491  459056 cri.go:89] found id: ""
	I0510 19:30:18.236519  459056 logs.go:282] 0 containers: []
	W0510 19:30:18.236528  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:18.236535  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:18.236591  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:18.275316  459056 cri.go:89] found id: ""
	I0510 19:30:18.275355  459056 logs.go:282] 0 containers: []
	W0510 19:30:18.275368  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:18.275376  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:18.275447  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:18.314904  459056 cri.go:89] found id: ""
	I0510 19:30:18.314946  459056 logs.go:282] 0 containers: []
	W0510 19:30:18.314963  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:18.314972  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:18.315049  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:18.353877  459056 cri.go:89] found id: ""
	I0510 19:30:18.353906  459056 logs.go:282] 0 containers: []
	W0510 19:30:18.353924  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:18.353933  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:18.354019  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:18.391081  459056 cri.go:89] found id: ""
	I0510 19:30:18.391115  459056 logs.go:282] 0 containers: []
	W0510 19:30:18.391124  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:18.391131  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:18.391208  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:18.430112  459056 cri.go:89] found id: ""
	I0510 19:30:18.430151  459056 logs.go:282] 0 containers: []
	W0510 19:30:18.430165  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:18.430171  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:18.430241  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:18.467247  459056 cri.go:89] found id: ""
	I0510 19:30:18.467282  459056 logs.go:282] 0 containers: []
	W0510 19:30:18.467294  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:18.467307  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:18.467331  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:18.483013  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:18.483049  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:18.556404  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:18.556437  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:18.556457  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:18.634193  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:18.634242  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:18.677713  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:18.677752  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:21.230499  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:21.248397  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:21.248485  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:21.284922  459056 cri.go:89] found id: ""
	I0510 19:30:21.284961  459056 logs.go:282] 0 containers: []
	W0510 19:30:21.284974  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:21.284983  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:21.285062  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:21.323019  459056 cri.go:89] found id: ""
	I0510 19:30:21.323054  459056 logs.go:282] 0 containers: []
	W0510 19:30:21.323064  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:21.323071  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:21.323148  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:21.361809  459056 cri.go:89] found id: ""
	I0510 19:30:21.361838  459056 logs.go:282] 0 containers: []
	W0510 19:30:21.361846  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:21.361852  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:21.361930  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:21.399062  459056 cri.go:89] found id: ""
	I0510 19:30:21.399101  459056 logs.go:282] 0 containers: []
	W0510 19:30:21.399115  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:21.399124  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:21.399195  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:21.436027  459056 cri.go:89] found id: ""
	I0510 19:30:21.436061  459056 logs.go:282] 0 containers: []
	W0510 19:30:21.436071  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:21.436077  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:21.436143  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:21.481101  459056 cri.go:89] found id: ""
	I0510 19:30:21.481141  459056 logs.go:282] 0 containers: []
	W0510 19:30:21.481151  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:21.481158  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:21.481213  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:21.525918  459056 cri.go:89] found id: ""
	I0510 19:30:21.525949  459056 logs.go:282] 0 containers: []
	W0510 19:30:21.525958  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:21.525965  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:21.526051  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:21.566402  459056 cri.go:89] found id: ""
	I0510 19:30:21.566438  459056 logs.go:282] 0 containers: []
	W0510 19:30:21.566451  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:21.566466  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:21.566483  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:21.640295  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:21.640326  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:21.640344  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:21.723808  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:21.723860  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:21.787009  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:21.787053  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:21.846605  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:21.846653  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:24.365273  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:24.382257  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:24.382346  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:24.422109  459056 cri.go:89] found id: ""
	I0510 19:30:24.422145  459056 logs.go:282] 0 containers: []
	W0510 19:30:24.422154  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:24.422161  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:24.422223  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:24.461355  459056 cri.go:89] found id: ""
	I0510 19:30:24.461382  459056 logs.go:282] 0 containers: []
	W0510 19:30:24.461389  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:24.461395  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:24.461451  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:24.500168  459056 cri.go:89] found id: ""
	I0510 19:30:24.500203  459056 logs.go:282] 0 containers: []
	W0510 19:30:24.500214  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:24.500222  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:24.500293  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:24.535437  459056 cri.go:89] found id: ""
	I0510 19:30:24.535473  459056 logs.go:282] 0 containers: []
	W0510 19:30:24.535481  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:24.535487  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:24.535567  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:24.574226  459056 cri.go:89] found id: ""
	I0510 19:30:24.574262  459056 logs.go:282] 0 containers: []
	W0510 19:30:24.574274  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:24.574282  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:24.574353  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:24.611038  459056 cri.go:89] found id: ""
	I0510 19:30:24.611076  459056 logs.go:282] 0 containers: []
	W0510 19:30:24.611085  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:24.611094  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:24.611148  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:24.650255  459056 cri.go:89] found id: ""
	I0510 19:30:24.650291  459056 logs.go:282] 0 containers: []
	W0510 19:30:24.650303  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:24.650313  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:24.650382  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:24.688115  459056 cri.go:89] found id: ""
	I0510 19:30:24.688148  459056 logs.go:282] 0 containers: []
	W0510 19:30:24.688157  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:24.688166  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:24.688180  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:24.738142  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:24.738193  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:24.754027  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:24.754059  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:24.836221  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:24.836251  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:24.836270  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:24.911260  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:24.911306  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:27.453339  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:27.470837  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:27.470922  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:27.510141  459056 cri.go:89] found id: ""
	I0510 19:30:27.510171  459056 logs.go:282] 0 containers: []
	W0510 19:30:27.510180  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:27.510187  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:27.510245  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:27.560311  459056 cri.go:89] found id: ""
	I0510 19:30:27.560337  459056 logs.go:282] 0 containers: []
	W0510 19:30:27.560346  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:27.560352  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:27.560412  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:27.615618  459056 cri.go:89] found id: ""
	I0510 19:30:27.615648  459056 logs.go:282] 0 containers: []
	W0510 19:30:27.615658  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:27.615683  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:27.615745  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:27.663257  459056 cri.go:89] found id: ""
	I0510 19:30:27.663290  459056 logs.go:282] 0 containers: []
	W0510 19:30:27.663298  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:27.663305  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:27.663377  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:27.705815  459056 cri.go:89] found id: ""
	I0510 19:30:27.705856  459056 logs.go:282] 0 containers: []
	W0510 19:30:27.705864  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:27.705870  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:27.705932  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:27.744580  459056 cri.go:89] found id: ""
	I0510 19:30:27.744612  459056 logs.go:282] 0 containers: []
	W0510 19:30:27.744620  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:27.744637  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:27.744694  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:27.781041  459056 cri.go:89] found id: ""
	I0510 19:30:27.781070  459056 logs.go:282] 0 containers: []
	W0510 19:30:27.781078  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:27.781087  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:27.781145  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:27.818543  459056 cri.go:89] found id: ""
	I0510 19:30:27.818583  459056 logs.go:282] 0 containers: []
	W0510 19:30:27.818592  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:27.818603  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:27.818631  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:27.834004  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:27.834038  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:27.907944  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:27.907973  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:27.907991  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:27.988229  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:27.988276  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:28.032107  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:28.032141  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:30.581752  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:30.599095  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:30.599167  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:30.637772  459056 cri.go:89] found id: ""
	I0510 19:30:30.637804  459056 logs.go:282] 0 containers: []
	W0510 19:30:30.637815  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:30.637824  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:30.637894  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:30.674650  459056 cri.go:89] found id: ""
	I0510 19:30:30.674690  459056 logs.go:282] 0 containers: []
	W0510 19:30:30.674702  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:30.674709  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:30.674791  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:30.712335  459056 cri.go:89] found id: ""
	I0510 19:30:30.712370  459056 logs.go:282] 0 containers: []
	W0510 19:30:30.712379  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:30.712384  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:30.712457  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:30.749850  459056 cri.go:89] found id: ""
	I0510 19:30:30.749894  459056 logs.go:282] 0 containers: []
	W0510 19:30:30.749906  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:30.749914  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:30.750001  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:30.790937  459056 cri.go:89] found id: ""
	I0510 19:30:30.790976  459056 logs.go:282] 0 containers: []
	W0510 19:30:30.790985  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:30.790992  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:30.791048  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:30.830223  459056 cri.go:89] found id: ""
	I0510 19:30:30.830256  459056 logs.go:282] 0 containers: []
	W0510 19:30:30.830265  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:30.830271  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:30.830335  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:30.868658  459056 cri.go:89] found id: ""
	I0510 19:30:30.868685  459056 logs.go:282] 0 containers: []
	W0510 19:30:30.868693  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:30.868699  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:30.868755  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:30.908485  459056 cri.go:89] found id: ""
	I0510 19:30:30.908518  459056 logs.go:282] 0 containers: []
	W0510 19:30:30.908527  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:30.908537  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:30.908576  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:30.987890  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:30.987915  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:30.987930  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:31.066668  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:31.066724  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:31.114289  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:31.114322  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:31.168049  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:31.168101  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:33.685815  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:33.702996  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:33.703075  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:33.740679  459056 cri.go:89] found id: ""
	I0510 19:30:33.740710  459056 logs.go:282] 0 containers: []
	W0510 19:30:33.740718  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:33.740724  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:33.740789  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:33.778013  459056 cri.go:89] found id: ""
	I0510 19:30:33.778045  459056 logs.go:282] 0 containers: []
	W0510 19:30:33.778053  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:33.778059  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:33.778118  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:33.819601  459056 cri.go:89] found id: ""
	I0510 19:30:33.819634  459056 logs.go:282] 0 containers: []
	W0510 19:30:33.819643  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:33.819649  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:33.819719  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:33.858368  459056 cri.go:89] found id: ""
	I0510 19:30:33.858399  459056 logs.go:282] 0 containers: []
	W0510 19:30:33.858407  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:33.858414  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:33.858469  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:33.899175  459056 cri.go:89] found id: ""
	I0510 19:30:33.899210  459056 logs.go:282] 0 containers: []
	W0510 19:30:33.899219  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:33.899225  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:33.899297  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:33.938037  459056 cri.go:89] found id: ""
	I0510 19:30:33.938075  459056 logs.go:282] 0 containers: []
	W0510 19:30:33.938085  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:33.938092  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:33.938151  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:33.976364  459056 cri.go:89] found id: ""
	I0510 19:30:33.976398  459056 logs.go:282] 0 containers: []
	W0510 19:30:33.976408  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:33.976415  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:33.976474  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:34.019444  459056 cri.go:89] found id: ""
	I0510 19:30:34.019476  459056 logs.go:282] 0 containers: []
	W0510 19:30:34.019485  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:34.019496  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:34.019509  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:34.066863  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:34.066897  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:34.116346  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:34.116394  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:34.131809  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:34.131842  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:34.201228  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:34.201261  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:34.201278  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:36.784883  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:36.802185  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:36.802277  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:36.838342  459056 cri.go:89] found id: ""
	I0510 19:30:36.838382  459056 logs.go:282] 0 containers: []
	W0510 19:30:36.838395  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:36.838405  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:36.838484  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:36.875021  459056 cri.go:89] found id: ""
	I0510 19:30:36.875052  459056 logs.go:282] 0 containers: []
	W0510 19:30:36.875060  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:36.875066  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:36.875136  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:36.912550  459056 cri.go:89] found id: ""
	I0510 19:30:36.912579  459056 logs.go:282] 0 containers: []
	W0510 19:30:36.912589  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:36.912595  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:36.912672  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:36.953970  459056 cri.go:89] found id: ""
	I0510 19:30:36.954002  459056 logs.go:282] 0 containers: []
	W0510 19:30:36.954013  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:36.954021  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:36.954090  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:36.990198  459056 cri.go:89] found id: ""
	I0510 19:30:36.990227  459056 logs.go:282] 0 containers: []
	W0510 19:30:36.990236  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:36.990242  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:36.990315  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:37.026559  459056 cri.go:89] found id: ""
	I0510 19:30:37.026594  459056 logs.go:282] 0 containers: []
	W0510 19:30:37.026604  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:37.026612  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:37.026696  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:37.063080  459056 cri.go:89] found id: ""
	I0510 19:30:37.063112  459056 logs.go:282] 0 containers: []
	W0510 19:30:37.063120  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:37.063127  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:37.063181  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:37.099746  459056 cri.go:89] found id: ""
	I0510 19:30:37.099786  459056 logs.go:282] 0 containers: []
	W0510 19:30:37.099800  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:37.099814  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:37.099831  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:37.150884  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:37.150932  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:37.166536  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:37.166568  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:37.241013  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:37.241045  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:37.241062  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:37.319328  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:37.319370  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:39.863629  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:39.881255  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:39.881331  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:39.921099  459056 cri.go:89] found id: ""
	I0510 19:30:39.921128  459056 logs.go:282] 0 containers: []
	W0510 19:30:39.921136  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:39.921142  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:39.921208  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:39.958588  459056 cri.go:89] found id: ""
	I0510 19:30:39.958620  459056 logs.go:282] 0 containers: []
	W0510 19:30:39.958629  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:39.958634  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:39.958701  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:39.995129  459056 cri.go:89] found id: ""
	I0510 19:30:39.995160  459056 logs.go:282] 0 containers: []
	W0510 19:30:39.995168  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:39.995174  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:39.995230  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:40.031278  459056 cri.go:89] found id: ""
	I0510 19:30:40.031308  459056 logs.go:282] 0 containers: []
	W0510 19:30:40.031320  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:40.031328  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:40.031399  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:40.069662  459056 cri.go:89] found id: ""
	I0510 19:30:40.069694  459056 logs.go:282] 0 containers: []
	W0510 19:30:40.069703  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:40.069708  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:40.069769  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:40.106418  459056 cri.go:89] found id: ""
	I0510 19:30:40.106452  459056 logs.go:282] 0 containers: []
	W0510 19:30:40.106464  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:40.106474  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:40.106546  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:40.143694  459056 cri.go:89] found id: ""
	I0510 19:30:40.143728  459056 logs.go:282] 0 containers: []
	W0510 19:30:40.143737  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:40.143743  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:40.143812  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:40.178265  459056 cri.go:89] found id: ""
	I0510 19:30:40.178296  459056 logs.go:282] 0 containers: []
	W0510 19:30:40.178304  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:40.178314  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:40.178328  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:40.247907  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:40.247940  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:40.247959  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:40.321933  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:40.321985  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:40.368947  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:40.368991  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:40.419749  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:40.419791  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:42.936834  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:42.954258  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:42.954332  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:42.991570  459056 cri.go:89] found id: ""
	I0510 19:30:42.991603  459056 logs.go:282] 0 containers: []
	W0510 19:30:42.991611  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:42.991617  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:42.991685  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:43.029718  459056 cri.go:89] found id: ""
	I0510 19:30:43.029751  459056 logs.go:282] 0 containers: []
	W0510 19:30:43.029759  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:43.029766  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:43.029824  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:43.068297  459056 cri.go:89] found id: ""
	I0510 19:30:43.068328  459056 logs.go:282] 0 containers: []
	W0510 19:30:43.068335  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:43.068342  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:43.068405  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:43.109805  459056 cri.go:89] found id: ""
	I0510 19:30:43.109833  459056 logs.go:282] 0 containers: []
	W0510 19:30:43.109841  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:43.109847  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:43.109900  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:43.148109  459056 cri.go:89] found id: ""
	I0510 19:30:43.148141  459056 logs.go:282] 0 containers: []
	W0510 19:30:43.148149  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:43.148156  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:43.148224  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:43.185187  459056 cri.go:89] found id: ""
	I0510 19:30:43.185221  459056 logs.go:282] 0 containers: []
	W0510 19:30:43.185230  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:43.185239  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:43.185293  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:43.224447  459056 cri.go:89] found id: ""
	I0510 19:30:43.224476  459056 logs.go:282] 0 containers: []
	W0510 19:30:43.224485  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:43.224496  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:43.224552  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:43.268442  459056 cri.go:89] found id: ""
	I0510 19:30:43.268471  459056 logs.go:282] 0 containers: []
	W0510 19:30:43.268480  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:43.268489  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:43.268501  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:43.347249  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:43.347282  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:43.347307  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:43.427928  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:43.427975  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:43.473221  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:43.473258  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:43.522748  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:43.522796  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:46.040289  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:46.058969  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:46.059051  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:46.102709  459056 cri.go:89] found id: ""
	I0510 19:30:46.102757  459056 logs.go:282] 0 containers: []
	W0510 19:30:46.102775  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:46.102786  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:46.102848  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:46.146551  459056 cri.go:89] found id: ""
	I0510 19:30:46.146584  459056 logs.go:282] 0 containers: []
	W0510 19:30:46.146593  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:46.146599  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:46.146670  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:46.187445  459056 cri.go:89] found id: ""
	I0510 19:30:46.187484  459056 logs.go:282] 0 containers: []
	W0510 19:30:46.187498  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:46.187505  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:46.187575  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:46.224647  459056 cri.go:89] found id: ""
	I0510 19:30:46.224686  459056 logs.go:282] 0 containers: []
	W0510 19:30:46.224697  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:46.224706  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:46.224786  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:46.263513  459056 cri.go:89] found id: ""
	I0510 19:30:46.263545  459056 logs.go:282] 0 containers: []
	W0510 19:30:46.263554  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:46.263560  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:46.263639  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:46.300049  459056 cri.go:89] found id: ""
	I0510 19:30:46.300085  459056 logs.go:282] 0 containers: []
	W0510 19:30:46.300096  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:46.300104  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:46.300174  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:46.337107  459056 cri.go:89] found id: ""
	I0510 19:30:46.337139  459056 logs.go:282] 0 containers: []
	W0510 19:30:46.337150  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:46.337159  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:46.337219  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:46.373699  459056 cri.go:89] found id: ""
	I0510 19:30:46.373736  459056 logs.go:282] 0 containers: []
	W0510 19:30:46.373748  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:46.373761  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:46.373777  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:46.425713  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:46.425764  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:46.441565  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:46.441602  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:46.517861  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:46.517897  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:46.517918  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:46.601755  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:46.601807  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:49.147704  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:49.165325  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:49.165397  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:49.206272  459056 cri.go:89] found id: ""
	I0510 19:30:49.206309  459056 logs.go:282] 0 containers: []
	W0510 19:30:49.206318  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:49.206324  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:49.206385  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:49.241832  459056 cri.go:89] found id: ""
	I0510 19:30:49.241863  459056 logs.go:282] 0 containers: []
	W0510 19:30:49.241871  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:49.241878  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:49.241958  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:49.280474  459056 cri.go:89] found id: ""
	I0510 19:30:49.280505  459056 logs.go:282] 0 containers: []
	W0510 19:30:49.280514  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:49.280520  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:49.280577  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:49.317656  459056 cri.go:89] found id: ""
	I0510 19:30:49.317687  459056 logs.go:282] 0 containers: []
	W0510 19:30:49.317699  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:49.317718  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:49.317789  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:49.356059  459056 cri.go:89] found id: ""
	I0510 19:30:49.356094  459056 logs.go:282] 0 containers: []
	W0510 19:30:49.356102  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:49.356112  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:49.356169  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:49.396831  459056 cri.go:89] found id: ""
	I0510 19:30:49.396864  459056 logs.go:282] 0 containers: []
	W0510 19:30:49.396877  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:49.396885  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:49.396954  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:49.433301  459056 cri.go:89] found id: ""
	I0510 19:30:49.433328  459056 logs.go:282] 0 containers: []
	W0510 19:30:49.433336  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:49.433342  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:49.433416  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:49.470642  459056 cri.go:89] found id: ""
	I0510 19:30:49.470674  459056 logs.go:282] 0 containers: []
	W0510 19:30:49.470686  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:49.470698  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:49.470715  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:49.520867  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:49.520910  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:49.536370  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:49.536406  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:49.608860  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:49.608894  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:49.608913  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:49.687344  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:49.687395  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:52.231133  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:52.248456  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:52.248550  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:52.288902  459056 cri.go:89] found id: ""
	I0510 19:30:52.288960  459056 logs.go:282] 0 containers: []
	W0510 19:30:52.288973  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:52.288982  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:52.289062  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:52.326578  459056 cri.go:89] found id: ""
	I0510 19:30:52.326611  459056 logs.go:282] 0 containers: []
	W0510 19:30:52.326626  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:52.326634  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:52.326713  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:52.368627  459056 cri.go:89] found id: ""
	I0510 19:30:52.368657  459056 logs.go:282] 0 containers: []
	W0510 19:30:52.368666  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:52.368672  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:52.368754  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:52.406483  459056 cri.go:89] found id: ""
	I0510 19:30:52.406518  459056 logs.go:282] 0 containers: []
	W0510 19:30:52.406526  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:52.406533  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:52.406599  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:52.445770  459056 cri.go:89] found id: ""
	I0510 19:30:52.445805  459056 logs.go:282] 0 containers: []
	W0510 19:30:52.445816  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:52.445826  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:52.445898  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:52.484279  459056 cri.go:89] found id: ""
	I0510 19:30:52.484315  459056 logs.go:282] 0 containers: []
	W0510 19:30:52.484325  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:52.484332  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:52.484395  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:52.523564  459056 cri.go:89] found id: ""
	I0510 19:30:52.523601  459056 logs.go:282] 0 containers: []
	W0510 19:30:52.523628  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:52.523634  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:52.523701  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:52.566712  459056 cri.go:89] found id: ""
	I0510 19:30:52.566747  459056 logs.go:282] 0 containers: []
	W0510 19:30:52.566756  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:52.566768  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:52.566784  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:52.618210  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:52.618263  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:52.635481  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:52.635518  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:52.710370  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:52.710415  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:52.710435  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:52.789902  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:52.789960  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:55.334697  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:55.351738  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:55.351815  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:55.387464  459056 cri.go:89] found id: ""
	I0510 19:30:55.387493  459056 logs.go:282] 0 containers: []
	W0510 19:30:55.387503  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:55.387512  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:55.387578  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:55.424565  459056 cri.go:89] found id: ""
	I0510 19:30:55.424597  459056 logs.go:282] 0 containers: []
	W0510 19:30:55.424608  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:55.424617  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:55.424690  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:55.461558  459056 cri.go:89] found id: ""
	I0510 19:30:55.461597  459056 logs.go:282] 0 containers: []
	W0510 19:30:55.461608  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:55.461616  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:55.461689  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:55.500713  459056 cri.go:89] found id: ""
	I0510 19:30:55.500742  459056 logs.go:282] 0 containers: []
	W0510 19:30:55.500756  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:55.500763  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:55.500826  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:55.536104  459056 cri.go:89] found id: ""
	I0510 19:30:55.536132  459056 logs.go:282] 0 containers: []
	W0510 19:30:55.536141  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:55.536147  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:55.536206  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:55.571895  459056 cri.go:89] found id: ""
	I0510 19:30:55.571924  459056 logs.go:282] 0 containers: []
	W0510 19:30:55.571932  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:55.571938  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:55.571996  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:55.610794  459056 cri.go:89] found id: ""
	I0510 19:30:55.610822  459056 logs.go:282] 0 containers: []
	W0510 19:30:55.610831  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:55.610837  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:55.610904  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:55.647514  459056 cri.go:89] found id: ""
	I0510 19:30:55.647544  459056 logs.go:282] 0 containers: []
	W0510 19:30:55.647554  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:55.647563  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:55.647578  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:55.697745  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:55.697788  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:55.714126  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:55.714161  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:55.786711  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:55.786735  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:55.786749  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:55.863002  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:55.863049  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:58.428393  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:58.446138  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:58.446216  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:58.482821  459056 cri.go:89] found id: ""
	I0510 19:30:58.482856  459056 logs.go:282] 0 containers: []
	W0510 19:30:58.482872  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:58.482880  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:58.482939  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:58.524325  459056 cri.go:89] found id: ""
	I0510 19:30:58.524358  459056 logs.go:282] 0 containers: []
	W0510 19:30:58.524369  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:58.524377  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:58.524433  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:58.564327  459056 cri.go:89] found id: ""
	I0510 19:30:58.564366  459056 logs.go:282] 0 containers: []
	W0510 19:30:58.564377  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:58.564383  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:58.564439  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:58.602937  459056 cri.go:89] found id: ""
	I0510 19:30:58.602966  459056 logs.go:282] 0 containers: []
	W0510 19:30:58.602974  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:58.602981  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:58.603038  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:58.639820  459056 cri.go:89] found id: ""
	I0510 19:30:58.639852  459056 logs.go:282] 0 containers: []
	W0510 19:30:58.639863  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:58.639871  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:58.639963  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:58.676466  459056 cri.go:89] found id: ""
	I0510 19:30:58.676503  459056 logs.go:282] 0 containers: []
	W0510 19:30:58.676515  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:58.676524  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:58.676593  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:58.712669  459056 cri.go:89] found id: ""
	I0510 19:30:58.712706  459056 logs.go:282] 0 containers: []
	W0510 19:30:58.712715  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:58.712721  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:58.712797  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:58.748436  459056 cri.go:89] found id: ""
	I0510 19:30:58.748474  459056 logs.go:282] 0 containers: []
	W0510 19:30:58.748485  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:58.748496  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:58.748513  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:58.801263  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:58.801311  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:58.816908  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:58.816945  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:58.890881  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:58.890912  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:58.890932  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:58.969061  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:58.969113  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:31:01.513933  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:31:01.531492  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:31:01.531565  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:31:01.568296  459056 cri.go:89] found id: ""
	I0510 19:31:01.568324  459056 logs.go:282] 0 containers: []
	W0510 19:31:01.568333  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:31:01.568340  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:31:01.568396  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:31:01.610372  459056 cri.go:89] found id: ""
	I0510 19:31:01.610406  459056 logs.go:282] 0 containers: []
	W0510 19:31:01.610415  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:31:01.610421  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:31:01.610485  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:31:01.648652  459056 cri.go:89] found id: ""
	I0510 19:31:01.648682  459056 logs.go:282] 0 containers: []
	W0510 19:31:01.648690  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:31:01.648696  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:31:01.648751  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:31:01.686551  459056 cri.go:89] found id: ""
	I0510 19:31:01.686583  459056 logs.go:282] 0 containers: []
	W0510 19:31:01.686595  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:31:01.686604  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:31:01.686694  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:31:01.724202  459056 cri.go:89] found id: ""
	I0510 19:31:01.724243  459056 logs.go:282] 0 containers: []
	W0510 19:31:01.724255  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:31:01.724261  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:31:01.724337  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:31:01.763500  459056 cri.go:89] found id: ""
	I0510 19:31:01.763534  459056 logs.go:282] 0 containers: []
	W0510 19:31:01.763544  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:31:01.763550  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:31:01.763629  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:31:01.808280  459056 cri.go:89] found id: ""
	I0510 19:31:01.808312  459056 logs.go:282] 0 containers: []
	W0510 19:31:01.808324  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:31:01.808332  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:31:01.808403  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:31:01.843980  459056 cri.go:89] found id: ""
	I0510 19:31:01.844018  459056 logs.go:282] 0 containers: []
	W0510 19:31:01.844031  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:31:01.844044  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:31:01.844061  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:31:01.907482  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:31:01.907521  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:31:01.922645  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:31:01.922683  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:31:01.999977  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:31:02.000009  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:31:02.000031  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:31:02.078872  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:31:02.078920  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:31:04.624201  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:31:04.641739  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:31:04.641818  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:31:04.680796  459056 cri.go:89] found id: ""
	I0510 19:31:04.680825  459056 logs.go:282] 0 containers: []
	W0510 19:31:04.680833  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:31:04.680839  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:31:04.680893  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:31:04.718840  459056 cri.go:89] found id: ""
	I0510 19:31:04.718867  459056 logs.go:282] 0 containers: []
	W0510 19:31:04.718874  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:31:04.718880  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:31:04.718943  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:31:04.753687  459056 cri.go:89] found id: ""
	I0510 19:31:04.753726  459056 logs.go:282] 0 containers: []
	W0510 19:31:04.753737  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:31:04.753745  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:31:04.753815  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:31:04.790863  459056 cri.go:89] found id: ""
	I0510 19:31:04.790893  459056 logs.go:282] 0 containers: []
	W0510 19:31:04.790903  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:31:04.790910  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:31:04.790969  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:31:04.828293  459056 cri.go:89] found id: ""
	I0510 19:31:04.828321  459056 logs.go:282] 0 containers: []
	W0510 19:31:04.828329  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:31:04.828335  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:31:04.828400  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:31:04.865914  459056 cri.go:89] found id: ""
	I0510 19:31:04.865955  459056 logs.go:282] 0 containers: []
	W0510 19:31:04.865964  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:31:04.865970  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:31:04.866025  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:31:04.902834  459056 cri.go:89] found id: ""
	I0510 19:31:04.902866  459056 logs.go:282] 0 containers: []
	W0510 19:31:04.902879  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:31:04.902888  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:31:04.902960  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:31:04.939660  459056 cri.go:89] found id: ""
	I0510 19:31:04.939694  459056 logs.go:282] 0 containers: []
	W0510 19:31:04.939702  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:31:04.939711  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:31:04.939729  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:31:04.954569  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:31:04.954608  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:31:05.026998  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:31:05.027024  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:31:05.027041  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:31:05.111468  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:31:05.111520  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:31:05.155909  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:31:05.155953  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:31:07.709153  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:31:07.726572  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:31:07.726671  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:31:07.766663  459056 cri.go:89] found id: ""
	I0510 19:31:07.766691  459056 logs.go:282] 0 containers: []
	W0510 19:31:07.766703  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:31:07.766712  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:31:07.766909  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:31:07.806853  459056 cri.go:89] found id: ""
	I0510 19:31:07.806902  459056 logs.go:282] 0 containers: []
	W0510 19:31:07.806911  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:31:07.806917  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:31:07.806985  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:31:07.845188  459056 cri.go:89] found id: ""
	I0510 19:31:07.845218  459056 logs.go:282] 0 containers: []
	W0510 19:31:07.845227  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:31:07.845233  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:31:07.845291  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:31:07.884790  459056 cri.go:89] found id: ""
	I0510 19:31:07.884827  459056 logs.go:282] 0 containers: []
	W0510 19:31:07.884840  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:31:07.884847  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:31:07.884919  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:31:07.924161  459056 cri.go:89] found id: ""
	I0510 19:31:07.924195  459056 logs.go:282] 0 containers: []
	W0510 19:31:07.924206  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:31:07.924222  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:31:07.924288  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:31:07.962697  459056 cri.go:89] found id: ""
	I0510 19:31:07.962724  459056 logs.go:282] 0 containers: []
	W0510 19:31:07.962735  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:31:07.962744  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:31:07.962840  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:31:08.001266  459056 cri.go:89] found id: ""
	I0510 19:31:08.001306  459056 logs.go:282] 0 containers: []
	W0510 19:31:08.001318  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:31:08.001326  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:31:08.001418  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:31:08.040211  459056 cri.go:89] found id: ""
	I0510 19:31:08.040238  459056 logs.go:282] 0 containers: []
	W0510 19:31:08.040247  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:31:08.040255  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:31:08.040272  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:31:08.114738  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:31:08.114784  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:31:08.114802  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:31:08.188677  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:31:08.188725  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:31:08.232875  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:31:08.232908  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:31:08.293039  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:31:08.293095  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:31:10.811640  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:31:10.828942  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:31:10.829017  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:31:10.866960  459056 cri.go:89] found id: ""
	I0510 19:31:10.866993  459056 logs.go:282] 0 containers: []
	W0510 19:31:10.867003  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:31:10.867009  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:31:10.867066  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:31:10.906391  459056 cri.go:89] found id: ""
	I0510 19:31:10.906421  459056 logs.go:282] 0 containers: []
	W0510 19:31:10.906430  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:31:10.906436  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:31:10.906503  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:31:10.947062  459056 cri.go:89] found id: ""
	I0510 19:31:10.947091  459056 logs.go:282] 0 containers: []
	W0510 19:31:10.947100  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:31:10.947106  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:31:10.947172  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:31:10.984506  459056 cri.go:89] found id: ""
	I0510 19:31:10.984535  459056 logs.go:282] 0 containers: []
	W0510 19:31:10.984543  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:31:10.984549  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:31:10.984613  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:31:11.022676  459056 cri.go:89] found id: ""
	I0510 19:31:11.022715  459056 logs.go:282] 0 containers: []
	W0510 19:31:11.022724  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:31:11.022730  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:31:11.022805  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:31:11.067215  459056 cri.go:89] found id: ""
	I0510 19:31:11.067260  459056 logs.go:282] 0 containers: []
	W0510 19:31:11.067273  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:31:11.067282  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:31:11.067344  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:31:11.106883  459056 cri.go:89] found id: ""
	I0510 19:31:11.106912  459056 logs.go:282] 0 containers: []
	W0510 19:31:11.106920  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:31:11.106926  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:31:11.106984  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:31:11.148375  459056 cri.go:89] found id: ""
	I0510 19:31:11.148408  459056 logs.go:282] 0 containers: []
	W0510 19:31:11.148416  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:31:11.148426  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:31:11.148441  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:31:11.199507  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:31:11.199555  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:31:11.215477  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:31:11.215509  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:31:11.285250  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:31:11.285278  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:31:11.285292  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:31:11.365666  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:31:11.365724  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:31:13.914500  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:31:13.931769  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:31:13.931843  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:31:13.971450  459056 cri.go:89] found id: ""
	I0510 19:31:13.971481  459056 logs.go:282] 0 containers: []
	W0510 19:31:13.971491  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:31:13.971503  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:31:13.971585  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:31:14.016556  459056 cri.go:89] found id: ""
	I0510 19:31:14.016603  459056 logs.go:282] 0 containers: []
	W0510 19:31:14.016615  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:31:14.016624  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:31:14.016717  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:31:14.067360  459056 cri.go:89] found id: ""
	I0510 19:31:14.067395  459056 logs.go:282] 0 containers: []
	W0510 19:31:14.067406  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:31:14.067415  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:31:14.067490  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:31:14.115508  459056 cri.go:89] found id: ""
	I0510 19:31:14.115547  459056 logs.go:282] 0 containers: []
	W0510 19:31:14.115559  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:31:14.115566  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:31:14.115653  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:31:14.162589  459056 cri.go:89] found id: ""
	I0510 19:31:14.162620  459056 logs.go:282] 0 containers: []
	W0510 19:31:14.162629  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:31:14.162635  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:31:14.162720  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:31:14.203802  459056 cri.go:89] found id: ""
	I0510 19:31:14.203842  459056 logs.go:282] 0 containers: []
	W0510 19:31:14.203853  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:31:14.203861  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:31:14.203927  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:31:14.242404  459056 cri.go:89] found id: ""
	I0510 19:31:14.242440  459056 logs.go:282] 0 containers: []
	W0510 19:31:14.242449  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:31:14.242455  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:31:14.242526  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:31:14.279788  459056 cri.go:89] found id: ""
	I0510 19:31:14.279820  459056 logs.go:282] 0 containers: []
	W0510 19:31:14.279831  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:31:14.279843  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:31:14.279861  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:31:14.295706  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:31:14.295741  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:31:14.369637  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:31:14.369665  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:31:14.369684  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:31:14.445062  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:31:14.445113  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:31:14.488659  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:31:14.488692  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:31:17.042803  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:31:17.060263  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:31:17.060348  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:31:17.098561  459056 cri.go:89] found id: ""
	I0510 19:31:17.098588  459056 logs.go:282] 0 containers: []
	W0510 19:31:17.098597  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:31:17.098602  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:31:17.098666  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:31:17.136124  459056 cri.go:89] found id: ""
	I0510 19:31:17.136155  459056 logs.go:282] 0 containers: []
	W0510 19:31:17.136163  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:31:17.136169  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:31:17.136226  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:31:17.174746  459056 cri.go:89] found id: ""
	I0510 19:31:17.174773  459056 logs.go:282] 0 containers: []
	W0510 19:31:17.174781  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:31:17.174788  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:31:17.174853  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:31:17.211764  459056 cri.go:89] found id: ""
	I0510 19:31:17.211802  459056 logs.go:282] 0 containers: []
	W0510 19:31:17.211813  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:31:17.211822  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:31:17.211893  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:31:17.250173  459056 cri.go:89] found id: ""
	I0510 19:31:17.250220  459056 logs.go:282] 0 containers: []
	W0510 19:31:17.250231  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:31:17.250240  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:31:17.250307  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:31:17.288067  459056 cri.go:89] found id: ""
	I0510 19:31:17.288098  459056 logs.go:282] 0 containers: []
	W0510 19:31:17.288106  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:31:17.288113  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:31:17.288167  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:31:17.332174  459056 cri.go:89] found id: ""
	I0510 19:31:17.332201  459056 logs.go:282] 0 containers: []
	W0510 19:31:17.332210  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:31:17.332215  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:31:17.332279  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:31:17.368361  459056 cri.go:89] found id: ""
	I0510 19:31:17.368393  459056 logs.go:282] 0 containers: []
	W0510 19:31:17.368401  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:31:17.368414  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:31:17.368431  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:31:17.419140  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:31:17.419188  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:31:17.435060  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:31:17.435092  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:31:17.503946  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:31:17.503971  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:31:17.503985  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:31:17.577584  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:31:17.577636  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:31:20.122561  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:31:20.140245  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:31:20.140318  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:31:20.176963  459056 cri.go:89] found id: ""
	I0510 19:31:20.176997  459056 logs.go:282] 0 containers: []
	W0510 19:31:20.177006  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:31:20.177014  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:31:20.177082  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:31:20.214648  459056 cri.go:89] found id: ""
	I0510 19:31:20.214686  459056 logs.go:282] 0 containers: []
	W0510 19:31:20.214694  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:31:20.214700  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:31:20.214756  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:31:20.252572  459056 cri.go:89] found id: ""
	I0510 19:31:20.252603  459056 logs.go:282] 0 containers: []
	W0510 19:31:20.252610  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:31:20.252616  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:31:20.252690  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:31:20.292626  459056 cri.go:89] found id: ""
	I0510 19:31:20.292658  459056 logs.go:282] 0 containers: []
	W0510 19:31:20.292667  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:31:20.292673  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:31:20.292731  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:31:20.331394  459056 cri.go:89] found id: ""
	I0510 19:31:20.331426  459056 logs.go:282] 0 containers: []
	W0510 19:31:20.331433  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:31:20.331440  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:31:20.331493  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:31:20.369499  459056 cri.go:89] found id: ""
	I0510 19:31:20.369526  459056 logs.go:282] 0 containers: []
	W0510 19:31:20.369534  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:31:20.369541  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:31:20.369598  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:31:20.409063  459056 cri.go:89] found id: ""
	I0510 19:31:20.409101  459056 logs.go:282] 0 containers: []
	W0510 19:31:20.409119  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:31:20.409129  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:31:20.409202  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:31:20.448127  459056 cri.go:89] found id: ""
	I0510 19:31:20.448165  459056 logs.go:282] 0 containers: []
	W0510 19:31:20.448176  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:31:20.448192  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:31:20.448217  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:31:20.529717  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:31:20.529761  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:31:20.572287  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:31:20.572324  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:31:20.622908  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:31:20.622953  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:31:20.638966  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:31:20.639001  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:31:20.710197  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:31:23.211978  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:31:23.228993  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:31:23.229066  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:31:23.266521  459056 cri.go:89] found id: ""
	I0510 19:31:23.266554  459056 logs.go:282] 0 containers: []
	W0510 19:31:23.266563  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:31:23.266570  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:31:23.266624  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:31:23.305315  459056 cri.go:89] found id: ""
	I0510 19:31:23.305348  459056 logs.go:282] 0 containers: []
	W0510 19:31:23.305362  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:31:23.305371  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:31:23.305428  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:31:23.353734  459056 cri.go:89] found id: ""
	I0510 19:31:23.353764  459056 logs.go:282] 0 containers: []
	W0510 19:31:23.353773  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:31:23.353779  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:31:23.353836  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:31:23.392351  459056 cri.go:89] found id: ""
	I0510 19:31:23.392389  459056 logs.go:282] 0 containers: []
	W0510 19:31:23.392400  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:31:23.392408  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:31:23.392481  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:31:23.432302  459056 cri.go:89] found id: ""
	I0510 19:31:23.432338  459056 logs.go:282] 0 containers: []
	W0510 19:31:23.432349  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:31:23.432357  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:31:23.432423  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:31:23.470143  459056 cri.go:89] found id: ""
	I0510 19:31:23.470171  459056 logs.go:282] 0 containers: []
	W0510 19:31:23.470178  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:31:23.470184  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:31:23.470240  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:31:23.510123  459056 cri.go:89] found id: ""
	I0510 19:31:23.510151  459056 logs.go:282] 0 containers: []
	W0510 19:31:23.510158  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:31:23.510164  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:31:23.510218  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:31:23.548111  459056 cri.go:89] found id: ""
	I0510 19:31:23.548146  459056 logs.go:282] 0 containers: []
	W0510 19:31:23.548155  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:31:23.548165  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:31:23.548177  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:31:23.592214  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:31:23.592252  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:31:23.644384  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:31:23.644431  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:31:23.660004  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:31:23.660050  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:31:23.737601  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:31:23.737630  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:31:23.737646  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:31:26.318790  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:31:26.335345  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:31:26.335418  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:31:26.374890  459056 cri.go:89] found id: ""
	I0510 19:31:26.374925  459056 logs.go:282] 0 containers: []
	W0510 19:31:26.374939  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:31:26.374949  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:31:26.375022  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:31:26.416223  459056 cri.go:89] found id: ""
	I0510 19:31:26.416256  459056 logs.go:282] 0 containers: []
	W0510 19:31:26.416269  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:31:26.416279  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:31:26.416360  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:31:26.455431  459056 cri.go:89] found id: ""
	I0510 19:31:26.455472  459056 logs.go:282] 0 containers: []
	W0510 19:31:26.455485  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:31:26.455493  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:31:26.455563  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:31:26.493542  459056 cri.go:89] found id: ""
	I0510 19:31:26.493569  459056 logs.go:282] 0 containers: []
	W0510 19:31:26.493579  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:31:26.493588  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:31:26.493657  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:31:26.536613  459056 cri.go:89] found id: ""
	I0510 19:31:26.536642  459056 logs.go:282] 0 containers: []
	W0510 19:31:26.536651  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:31:26.536657  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:31:26.536742  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:31:26.574555  459056 cri.go:89] found id: ""
	I0510 19:31:26.574589  459056 logs.go:282] 0 containers: []
	W0510 19:31:26.574601  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:31:26.574610  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:31:26.574686  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:31:26.615726  459056 cri.go:89] found id: ""
	I0510 19:31:26.615767  459056 logs.go:282] 0 containers: []
	W0510 19:31:26.615779  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:31:26.615794  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:31:26.616130  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:31:26.658332  459056 cri.go:89] found id: ""
	I0510 19:31:26.658364  459056 logs.go:282] 0 containers: []
	W0510 19:31:26.658373  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:31:26.658382  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:31:26.658397  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:31:26.714050  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:31:26.714103  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:31:26.729247  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:31:26.729283  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:31:26.802056  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:31:26.802098  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:31:26.802117  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:31:26.880723  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:31:26.880777  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:31:29.424963  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:31:29.442400  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:31:29.442471  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:31:29.480974  459056 cri.go:89] found id: ""
	I0510 19:31:29.481014  459056 logs.go:282] 0 containers: []
	W0510 19:31:29.481025  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:31:29.481032  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:31:29.481103  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:31:29.517132  459056 cri.go:89] found id: ""
	I0510 19:31:29.517178  459056 logs.go:282] 0 containers: []
	W0510 19:31:29.517190  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:31:29.517199  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:31:29.517271  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:31:29.555573  459056 cri.go:89] found id: ""
	I0510 19:31:29.555610  459056 logs.go:282] 0 containers: []
	W0510 19:31:29.555621  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:31:29.555629  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:31:29.555706  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:31:29.591136  459056 cri.go:89] found id: ""
	I0510 19:31:29.591168  459056 logs.go:282] 0 containers: []
	W0510 19:31:29.591175  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:31:29.591181  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:31:29.591249  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:31:29.629174  459056 cri.go:89] found id: ""
	I0510 19:31:29.629205  459056 logs.go:282] 0 containers: []
	W0510 19:31:29.629214  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:31:29.629220  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:31:29.629285  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:31:29.666035  459056 cri.go:89] found id: ""
	I0510 19:31:29.666067  459056 logs.go:282] 0 containers: []
	W0510 19:31:29.666075  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:31:29.666081  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:31:29.666140  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:31:29.705842  459056 cri.go:89] found id: ""
	I0510 19:31:29.705872  459056 logs.go:282] 0 containers: []
	W0510 19:31:29.705880  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:31:29.705886  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:31:29.705964  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:31:29.743559  459056 cri.go:89] found id: ""
	I0510 19:31:29.743592  459056 logs.go:282] 0 containers: []
	W0510 19:31:29.743600  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:31:29.743623  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:31:29.743637  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:31:29.792453  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:31:29.792499  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:31:29.807725  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:31:29.807765  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:31:29.881784  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:31:29.881812  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:31:29.881825  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:31:29.954965  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:31:29.955014  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:31:32.502586  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:31:32.520169  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:31:32.520239  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:31:32.557308  459056 cri.go:89] found id: ""
	I0510 19:31:32.557342  459056 logs.go:282] 0 containers: []
	W0510 19:31:32.557350  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:31:32.557356  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:31:32.557411  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:31:32.595792  459056 cri.go:89] found id: ""
	I0510 19:31:32.595822  459056 logs.go:282] 0 containers: []
	W0510 19:31:32.595830  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:31:32.595835  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:31:32.595891  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:31:32.634389  459056 cri.go:89] found id: ""
	I0510 19:31:32.634429  459056 logs.go:282] 0 containers: []
	W0510 19:31:32.634437  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:31:32.634443  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:31:32.634517  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:31:32.675925  459056 cri.go:89] found id: ""
	I0510 19:31:32.675957  459056 logs.go:282] 0 containers: []
	W0510 19:31:32.675966  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:31:32.675973  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:31:32.676027  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:31:32.712730  459056 cri.go:89] found id: ""
	I0510 19:31:32.712767  459056 logs.go:282] 0 containers: []
	W0510 19:31:32.712776  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:31:32.712782  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:31:32.712843  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:31:32.749733  459056 cri.go:89] found id: ""
	I0510 19:31:32.749765  459056 logs.go:282] 0 containers: []
	W0510 19:31:32.749774  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:31:32.749781  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:31:32.749841  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:31:32.789481  459056 cri.go:89] found id: ""
	I0510 19:31:32.789513  459056 logs.go:282] 0 containers: []
	W0510 19:31:32.789521  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:31:32.789527  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:31:32.789586  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:31:32.828742  459056 cri.go:89] found id: ""
	I0510 19:31:32.828779  459056 logs.go:282] 0 containers: []
	W0510 19:31:32.828788  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:31:32.828798  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:31:32.828822  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:31:32.843753  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:31:32.843787  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:31:32.912953  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:31:32.912982  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:31:32.912995  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:31:32.989726  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:31:32.989770  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:31:33.040906  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:31:33.040943  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:31:35.593878  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:31:35.612402  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:31:35.612506  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:31:35.651532  459056 cri.go:89] found id: ""
	I0510 19:31:35.651562  459056 logs.go:282] 0 containers: []
	W0510 19:31:35.651571  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:31:35.651579  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:31:35.651671  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:31:35.689499  459056 cri.go:89] found id: ""
	I0510 19:31:35.689530  459056 logs.go:282] 0 containers: []
	W0510 19:31:35.689539  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:31:35.689546  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:31:35.689611  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:31:35.729195  459056 cri.go:89] found id: ""
	I0510 19:31:35.729230  459056 logs.go:282] 0 containers: []
	W0510 19:31:35.729239  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:31:35.729245  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:31:35.729314  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:31:35.767099  459056 cri.go:89] found id: ""
	I0510 19:31:35.767133  459056 logs.go:282] 0 containers: []
	W0510 19:31:35.767146  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:31:35.767151  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:31:35.767208  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:31:35.808130  459056 cri.go:89] found id: ""
	I0510 19:31:35.808166  459056 logs.go:282] 0 containers: []
	W0510 19:31:35.808179  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:31:35.808187  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:31:35.808261  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:31:35.845791  459056 cri.go:89] found id: ""
	I0510 19:31:35.845824  459056 logs.go:282] 0 containers: []
	W0510 19:31:35.845834  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:31:35.845841  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:31:35.846005  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:31:35.884049  459056 cri.go:89] found id: ""
	I0510 19:31:35.884083  459056 logs.go:282] 0 containers: []
	W0510 19:31:35.884093  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:31:35.884101  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:31:35.884182  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:31:35.921358  459056 cri.go:89] found id: ""
	I0510 19:31:35.921405  459056 logs.go:282] 0 containers: []
	W0510 19:31:35.921438  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:31:35.921454  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:31:35.921471  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:31:35.975819  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:31:35.975866  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:31:35.991683  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:31:35.991719  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:31:36.062576  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:31:36.062609  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:31:36.062692  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:31:36.144124  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:31:36.144171  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:31:38.688627  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:31:38.706961  459056 kubeadm.go:593] duration metric: took 4m1.80853031s to restartPrimaryControlPlane
	W0510 19:31:38.707088  459056 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0510 19:31:38.707129  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0510 19:31:42.433199  459056 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.726037031s)
	I0510 19:31:42.433304  459056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0510 19:31:42.450520  459056 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0510 19:31:42.464170  459056 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0510 19:31:42.478440  459056 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0510 19:31:42.478465  459056 kubeadm.go:157] found existing configuration files:
	
	I0510 19:31:42.478527  459056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0510 19:31:42.490756  459056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0510 19:31:42.490825  459056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0510 19:31:42.503476  459056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0510 19:31:42.516078  459056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0510 19:31:42.516162  459056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0510 19:31:42.529093  459056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0510 19:31:42.541784  459056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0510 19:31:42.541857  459056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0510 19:31:42.554154  459056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0510 19:31:42.566298  459056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0510 19:31:42.566366  459056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0510 19:31:42.579144  459056 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0510 19:31:42.808604  459056 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0510 19:33:39.237462  459056 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0510 19:33:39.237653  459056 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0510 19:33:39.240214  459056 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0510 19:33:39.240284  459056 kubeadm.go:310] [preflight] Running pre-flight checks
	I0510 19:33:39.240378  459056 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0510 19:33:39.240505  459056 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0510 19:33:39.240669  459056 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0510 19:33:39.240726  459056 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0510 19:33:39.242836  459056 out.go:235]   - Generating certificates and keys ...
	I0510 19:33:39.242931  459056 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0510 19:33:39.243010  459056 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0510 19:33:39.243103  459056 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0510 19:33:39.243180  459056 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0510 19:33:39.243286  459056 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0510 19:33:39.243366  459056 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0510 19:33:39.243440  459056 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0510 19:33:39.243544  459056 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0510 19:33:39.243662  459056 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0510 19:33:39.243769  459056 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0510 19:33:39.243830  459056 kubeadm.go:310] [certs] Using the existing "sa" key
	I0510 19:33:39.243905  459056 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0510 19:33:39.243972  459056 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0510 19:33:39.244018  459056 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0510 19:33:39.244072  459056 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0510 19:33:39.244132  459056 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0510 19:33:39.244227  459056 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0510 19:33:39.244322  459056 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0510 19:33:39.244375  459056 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0510 19:33:39.244459  459056 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0510 19:33:39.246586  459056 out.go:235]   - Booting up control plane ...
	I0510 19:33:39.246698  459056 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0510 19:33:39.246800  459056 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0510 19:33:39.246872  459056 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0510 19:33:39.246943  459056 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0510 19:33:39.247151  459056 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0510 19:33:39.247198  459056 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0510 19:33:39.247270  459056 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:33:39.247423  459056 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:33:39.247478  459056 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:33:39.247671  459056 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:33:39.247748  459056 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:33:39.247894  459056 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:33:39.247981  459056 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:33:39.248179  459056 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:33:39.248247  459056 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:33:39.248415  459056 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:33:39.248423  459056 kubeadm.go:310] 
	I0510 19:33:39.248461  459056 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0510 19:33:39.248497  459056 kubeadm.go:310] 		timed out waiting for the condition
	I0510 19:33:39.248507  459056 kubeadm.go:310] 
	I0510 19:33:39.248540  459056 kubeadm.go:310] 	This error is likely caused by:
	I0510 19:33:39.248570  459056 kubeadm.go:310] 		- The kubelet is not running
	I0510 19:33:39.248664  459056 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0510 19:33:39.248671  459056 kubeadm.go:310] 
	I0510 19:33:39.248767  459056 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0510 19:33:39.248803  459056 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0510 19:33:39.248832  459056 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0510 19:33:39.248839  459056 kubeadm.go:310] 
	I0510 19:33:39.248927  459056 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0510 19:33:39.249007  459056 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0510 19:33:39.249014  459056 kubeadm.go:310] 
	I0510 19:33:39.249164  459056 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0510 19:33:39.249288  459056 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0510 19:33:39.249351  459056 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0510 19:33:39.249408  459056 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0510 19:33:39.249484  459056 kubeadm.go:310] 
	W0510 19:33:39.249624  459056 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0510 19:33:39.249703  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0510 19:33:39.710770  459056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0510 19:33:39.729461  459056 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0510 19:33:39.741531  459056 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0510 19:33:39.741556  459056 kubeadm.go:157] found existing configuration files:
	
	I0510 19:33:39.741617  459056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0510 19:33:39.752271  459056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0510 19:33:39.752339  459056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0510 19:33:39.764450  459056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0510 19:33:39.775142  459056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0510 19:33:39.775203  459056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0510 19:33:39.787008  459056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0510 19:33:39.798070  459056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0510 19:33:39.798143  459056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0510 19:33:39.809980  459056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0510 19:33:39.821862  459056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0510 19:33:39.821930  459056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0510 19:33:39.833890  459056 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0510 19:33:40.070673  459056 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0510 19:35:36.029186  459056 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0510 19:35:36.029314  459056 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0510 19:35:36.032027  459056 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0510 19:35:36.032078  459056 kubeadm.go:310] [preflight] Running pre-flight checks
	I0510 19:35:36.032177  459056 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0510 19:35:36.032280  459056 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0510 19:35:36.032361  459056 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0510 19:35:36.032446  459056 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0510 19:35:36.034371  459056 out.go:235]   - Generating certificates and keys ...
	I0510 19:35:36.034447  459056 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0510 19:35:36.034498  459056 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0510 19:35:36.034563  459056 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0510 19:35:36.034612  459056 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0510 19:35:36.034675  459056 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0510 19:35:36.034778  459056 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0510 19:35:36.034874  459056 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0510 19:35:36.034977  459056 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0510 19:35:36.035054  459056 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0510 19:35:36.035126  459056 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0510 19:35:36.035158  459056 kubeadm.go:310] [certs] Using the existing "sa" key
	I0510 19:35:36.035206  459056 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0510 19:35:36.035286  459056 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0510 19:35:36.035370  459056 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0510 19:35:36.035434  459056 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0510 19:35:36.035501  459056 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0510 19:35:36.035658  459056 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0510 19:35:36.035738  459056 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0510 19:35:36.035795  459056 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0510 19:35:36.035884  459056 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0510 19:35:36.037686  459056 out.go:235]   - Booting up control plane ...
	I0510 19:35:36.037791  459056 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0510 19:35:36.037869  459056 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0510 19:35:36.037934  459056 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0510 19:35:36.038008  459056 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0510 19:35:36.038231  459056 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0510 19:35:36.038305  459056 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0510 19:35:36.038398  459056 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:35:36.038630  459056 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:35:36.038727  459056 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:35:36.038913  459056 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:35:36.038987  459056 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:35:36.039203  459056 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:35:36.039326  459056 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:35:36.039577  459056 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:35:36.039655  459056 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:35:36.039818  459056 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:35:36.039825  459056 kubeadm.go:310] 
	I0510 19:35:36.039859  459056 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0510 19:35:36.039904  459056 kubeadm.go:310] 		timed out waiting for the condition
	I0510 19:35:36.039919  459056 kubeadm.go:310] 
	I0510 19:35:36.039948  459056 kubeadm.go:310] 	This error is likely caused by:
	I0510 19:35:36.039978  459056 kubeadm.go:310] 		- The kubelet is not running
	I0510 19:35:36.040071  459056 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0510 19:35:36.040078  459056 kubeadm.go:310] 
	I0510 19:35:36.040179  459056 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0510 19:35:36.040209  459056 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0510 19:35:36.040237  459056 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0510 19:35:36.040244  459056 kubeadm.go:310] 
	I0510 19:35:36.040337  459056 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0510 19:35:36.040419  459056 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0510 19:35:36.040442  459056 kubeadm.go:310] 
	I0510 19:35:36.040555  459056 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0510 19:35:36.040655  459056 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0510 19:35:36.040766  459056 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0510 19:35:36.040836  459056 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0510 19:35:36.040862  459056 kubeadm.go:310] 
	I0510 19:35:36.040906  459056 kubeadm.go:394] duration metric: took 7m59.202425038s to StartCluster
	I0510 19:35:36.040958  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:35:36.041023  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:35:36.097650  459056 cri.go:89] found id: ""
	I0510 19:35:36.097683  459056 logs.go:282] 0 containers: []
	W0510 19:35:36.097698  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:35:36.097708  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:35:36.097773  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:35:36.142587  459056 cri.go:89] found id: ""
	I0510 19:35:36.142619  459056 logs.go:282] 0 containers: []
	W0510 19:35:36.142627  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:35:36.142633  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:35:36.142702  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:35:36.186330  459056 cri.go:89] found id: ""
	I0510 19:35:36.186361  459056 logs.go:282] 0 containers: []
	W0510 19:35:36.186370  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:35:36.186376  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:35:36.186444  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:35:36.230965  459056 cri.go:89] found id: ""
	I0510 19:35:36.230994  459056 logs.go:282] 0 containers: []
	W0510 19:35:36.231001  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:35:36.231007  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:35:36.231062  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:35:36.276491  459056 cri.go:89] found id: ""
	I0510 19:35:36.276520  459056 logs.go:282] 0 containers: []
	W0510 19:35:36.276528  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:35:36.276534  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:35:36.276598  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:35:36.321937  459056 cri.go:89] found id: ""
	I0510 19:35:36.321971  459056 logs.go:282] 0 containers: []
	W0510 19:35:36.321980  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:35:36.321987  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:35:36.322050  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:35:36.364757  459056 cri.go:89] found id: ""
	I0510 19:35:36.364797  459056 logs.go:282] 0 containers: []
	W0510 19:35:36.364809  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:35:36.364818  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:35:36.364875  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:35:36.409488  459056 cri.go:89] found id: ""
	I0510 19:35:36.409523  459056 logs.go:282] 0 containers: []
	W0510 19:35:36.409532  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:35:36.409546  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:35:36.409561  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:35:36.462665  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:35:36.462705  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:35:36.478560  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:35:36.478591  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:35:36.555871  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:35:36.555904  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:35:36.555922  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:35:36.674559  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:35:36.674603  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0510 19:35:36.723413  459056 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0510 19:35:36.723488  459056 out.go:270] * 
	W0510 19:35:36.723574  459056 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0510 19:35:36.723589  459056 out.go:270] * 
	W0510 19:35:36.724458  459056 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0510 19:35:36.727493  459056 out.go:201] 
	W0510 19:35:36.728543  459056 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0510 19:35:36.728588  459056 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0510 19:35:36.728604  459056 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0510 19:35:36.729894  459056 out.go:201] 
	
	
	==> CRI-O <==
	May 10 19:44:40 old-k8s-version-089147 crio[815]: time="2025-05-10 19:44:40.391989435Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746906280391957298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e82b4ed1-bedc-4b85-b29d-cd8d18fe92ac name=/runtime.v1.ImageService/ImageFsInfo
	May 10 19:44:40 old-k8s-version-089147 crio[815]: time="2025-05-10 19:44:40.392966721Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=726e2a05-667c-416e-a43f-d1ad58214b3a name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:44:40 old-k8s-version-089147 crio[815]: time="2025-05-10 19:44:40.393041112Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=726e2a05-667c-416e-a43f-d1ad58214b3a name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:44:40 old-k8s-version-089147 crio[815]: time="2025-05-10 19:44:40.393073038Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=726e2a05-667c-416e-a43f-d1ad58214b3a name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:44:40 old-k8s-version-089147 crio[815]: time="2025-05-10 19:44:40.426814981Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=10d19996-2ef1-484c-8dce-31f71b33f105 name=/runtime.v1.RuntimeService/Version
	May 10 19:44:40 old-k8s-version-089147 crio[815]: time="2025-05-10 19:44:40.426889435Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=10d19996-2ef1-484c-8dce-31f71b33f105 name=/runtime.v1.RuntimeService/Version
	May 10 19:44:40 old-k8s-version-089147 crio[815]: time="2025-05-10 19:44:40.428335175Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e0856563-9c8e-4c48-b423-465e35a2faed name=/runtime.v1.ImageService/ImageFsInfo
	May 10 19:44:40 old-k8s-version-089147 crio[815]: time="2025-05-10 19:44:40.428769656Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746906280428746018,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e0856563-9c8e-4c48-b423-465e35a2faed name=/runtime.v1.ImageService/ImageFsInfo
	May 10 19:44:40 old-k8s-version-089147 crio[815]: time="2025-05-10 19:44:40.429387206Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bbad8d7f-f7ad-44f8-b8fa-49ea34aeabde name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:44:40 old-k8s-version-089147 crio[815]: time="2025-05-10 19:44:40.429434855Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bbad8d7f-f7ad-44f8-b8fa-49ea34aeabde name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:44:40 old-k8s-version-089147 crio[815]: time="2025-05-10 19:44:40.429485070Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=bbad8d7f-f7ad-44f8-b8fa-49ea34aeabde name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:44:40 old-k8s-version-089147 crio[815]: time="2025-05-10 19:44:40.464413375Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3380401d-be0c-4f2b-b284-b786f5f0bcf4 name=/runtime.v1.RuntimeService/Version
	May 10 19:44:40 old-k8s-version-089147 crio[815]: time="2025-05-10 19:44:40.464484546Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3380401d-be0c-4f2b-b284-b786f5f0bcf4 name=/runtime.v1.RuntimeService/Version
	May 10 19:44:40 old-k8s-version-089147 crio[815]: time="2025-05-10 19:44:40.465732357Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7a0820ba-0a19-428d-b358-d27459baf7f5 name=/runtime.v1.ImageService/ImageFsInfo
	May 10 19:44:40 old-k8s-version-089147 crio[815]: time="2025-05-10 19:44:40.466189997Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746906280466106210,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7a0820ba-0a19-428d-b358-d27459baf7f5 name=/runtime.v1.ImageService/ImageFsInfo
	May 10 19:44:40 old-k8s-version-089147 crio[815]: time="2025-05-10 19:44:40.466688918Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=05d1c20f-08c2-41bc-9b4d-c915fa9f586a name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:44:40 old-k8s-version-089147 crio[815]: time="2025-05-10 19:44:40.466732877Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=05d1c20f-08c2-41bc-9b4d-c915fa9f586a name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:44:40 old-k8s-version-089147 crio[815]: time="2025-05-10 19:44:40.466774168Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=05d1c20f-08c2-41bc-9b4d-c915fa9f586a name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:44:40 old-k8s-version-089147 crio[815]: time="2025-05-10 19:44:40.500215945Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5c29b514-cb4b-4845-af2a-3697e4c33f67 name=/runtime.v1.RuntimeService/Version
	May 10 19:44:40 old-k8s-version-089147 crio[815]: time="2025-05-10 19:44:40.500301813Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5c29b514-cb4b-4845-af2a-3697e4c33f67 name=/runtime.v1.RuntimeService/Version
	May 10 19:44:40 old-k8s-version-089147 crio[815]: time="2025-05-10 19:44:40.501987508Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=290696d7-5e07-4fc1-9f73-a6e03d4e11db name=/runtime.v1.ImageService/ImageFsInfo
	May 10 19:44:40 old-k8s-version-089147 crio[815]: time="2025-05-10 19:44:40.502591055Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746906280502561570,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=290696d7-5e07-4fc1-9f73-a6e03d4e11db name=/runtime.v1.ImageService/ImageFsInfo
	May 10 19:44:40 old-k8s-version-089147 crio[815]: time="2025-05-10 19:44:40.503987934Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ce633aa9-744d-4b13-b957-400db820b398 name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:44:40 old-k8s-version-089147 crio[815]: time="2025-05-10 19:44:40.504056182Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ce633aa9-744d-4b13-b957-400db820b398 name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:44:40 old-k8s-version-089147 crio[815]: time="2025-05-10 19:44:40.504088341Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ce633aa9-744d-4b13-b957-400db820b398 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[May10 19:27] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.000002] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.001401] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.001737] (rpcbind)[143]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.974355] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000007] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.102715] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.103174] kauditd_printk_skb: 74 callbacks suppressed
	[ +14.627732] kauditd_printk_skb: 46 callbacks suppressed
	[May10 19:33] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:44:40 up 17 min,  0 user,  load average: 0.05, 0.05, 0.06
	Linux old-k8s-version-089147 5.10.207 #1 SMP Fri May 9 03:49:24 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2024.11.2"
	
	
	==> kubelet <==
	May 10 19:44:38 old-k8s-version-089147 kubelet[7956]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.DefaultWatchErrorHandler(0xc000cb82a0, 0x4f04d00, 0xc0009a3ff0)
	May 10 19:44:38 old-k8s-version-089147 kubelet[7956]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	May 10 19:44:38 old-k8s-version-089147 kubelet[7956]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	May 10 19:44:38 old-k8s-version-089147 kubelet[7956]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	May 10 19:44:38 old-k8s-version-089147 kubelet[7956]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000259ef0)
	May 10 19:44:38 old-k8s-version-089147 kubelet[7956]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	May 10 19:44:38 old-k8s-version-089147 kubelet[7956]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0009efef0, 0x4f0ac20, 0xc00009c230, 0x1, 0xc0000a60c0)
	May 10 19:44:38 old-k8s-version-089147 kubelet[7956]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	May 10 19:44:38 old-k8s-version-089147 kubelet[7956]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000cb82a0, 0xc0000a60c0)
	May 10 19:44:38 old-k8s-version-089147 kubelet[7956]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	May 10 19:44:38 old-k8s-version-089147 kubelet[7956]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	May 10 19:44:38 old-k8s-version-089147 kubelet[7956]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	May 10 19:44:38 old-k8s-version-089147 kubelet[7956]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc0009cab80, 0xc00096fc00)
	May 10 19:44:38 old-k8s-version-089147 kubelet[7956]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	May 10 19:44:38 old-k8s-version-089147 kubelet[7956]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	May 10 19:44:38 old-k8s-version-089147 kubelet[7956]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	May 10 19:44:38 old-k8s-version-089147 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	May 10 19:44:38 old-k8s-version-089147 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	May 10 19:44:39 old-k8s-version-089147 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	May 10 19:44:39 old-k8s-version-089147 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	May 10 19:44:39 old-k8s-version-089147 kubelet[7966]: I0510 19:44:39.106686    7966 server.go:416] Version: v1.20.0
	May 10 19:44:39 old-k8s-version-089147 kubelet[7966]: I0510 19:44:39.107251    7966 server.go:837] Client rotation is on, will bootstrap in background
	May 10 19:44:39 old-k8s-version-089147 kubelet[7966]: I0510 19:44:39.109533    7966 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	May 10 19:44:39 old-k8s-version-089147 kubelet[7966]: W0510 19:44:39.110513    7966 manager.go:159] Cannot detect current cgroup on cgroup v2
	May 10 19:44:39 old-k8s-version-089147 kubelet[7966]: I0510 19:44:39.111731    7966 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-089147 -n old-k8s-version-089147
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-089147 -n old-k8s-version-089147: exit status 2 (248.900347ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-089147" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.29s)
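Note: the failure above is the kubelet never becoming healthy on the v1.20.0 (old-k8s-version) profile, and the captured log itself points at the likely cause (a kubelet cgroup-driver mismatch; see the "Cannot detect current cgroup on cgroup v2" kubelet line) together with a suggested workaround. The following is a minimal, non-authoritative triage sketch that only re-uses commands already printed in the log above; the profile name old-k8s-version-089147 is specific to this run and would differ elsewhere.

	# Check whether the kubelet is running on the node and why it last exited
	minikube -p old-k8s-version-089147 ssh "sudo systemctl status kubelet"
	minikube -p old-k8s-version-089147 ssh "sudo journalctl -xeu kubelet"

	# List any control-plane containers CRI-O started (crictl invocation from the kubeadm hint)
	minikube -p old-k8s-version-089147 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

	# Retry the start with the workaround suggested in the log (related minikube issue #4172)
	minikube start -p old-k8s-version-089147 --extra-config=kubelet.cgroup-driver=systemd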

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (369.71s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
E0510 19:44:45.392114  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/no-preload-433152/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
E0510 19:44:53.722685  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/default-k8s-diff-port-544623/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
E0510 19:45:20.403074  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/custom-flannel-380533/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
E0510 19:46:01.749376  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/enable-default-cni-380533/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
E0510 19:46:10.804843  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/flannel-380533/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
[the preceding warning was repeated 22 more times]
E0510 19:46:34.405335  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/bridge-380533/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
[the preceding warning was repeated 17 more times]
E0510 19:46:51.564944  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
[the preceding warning was repeated 89 more times]
E0510 19:48:22.327955  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/no-preload-433152/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
[the preceding warning was repeated 3 more times]
E0510 19:48:26.359358  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/auto-380533/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
[the preceding warning was repeated 4 more times]
E0510 19:48:30.656531  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/default-k8s-diff-port-544623/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
[the preceding warning was repeated 5 more times]
E0510 19:48:37.127074  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kindnet-380533/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
[the preceding warning was repeated 10 more times]
E0510 19:48:48.489979  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
E0510 19:49:25.378358  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/calico-380533/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
E0510 19:49:37.810111  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
E0510 19:50:20.403468  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/custom-flannel-380533/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.225:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.225:8443: connect: connection refused
start_stop_delete_test.go:285: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-089147 -n old-k8s-version-089147
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-089147 -n old-k8s-version-089147: exit status 2 (256.094434ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "old-k8s-version-089147" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-089147 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context old-k8s-version-089147 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.582µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-089147 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-089147 -n old-k8s-version-089147
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-089147 -n old-k8s-version-089147: exit status 2 (242.772975ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-089147 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-089147 logs -n 25: (1.117076168s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p default-k8s-diff-port-544623       | default-k8s-diff-port-544623 | jenkins | v1.35.0 | 10 May 25 19:25 UTC | 10 May 25 19:25 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-544623 | jenkins | v1.35.0 | 10 May 25 19:25 UTC | 10 May 25 19:26 UTC |
	|         | default-k8s-diff-port-544623                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-089147        | old-k8s-version-089147       | jenkins | v1.35.0 | 10 May 25 19:25 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-483140            | embed-certs-483140           | jenkins | v1.35.0 | 10 May 25 19:25 UTC | 10 May 25 19:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-483140                                  | embed-certs-483140           | jenkins | v1.35.0 | 10 May 25 19:25 UTC | 10 May 25 19:27 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | no-preload-433152 image list                           | no-preload-433152            | jenkins | v1.35.0 | 10 May 25 19:26 UTC | 10 May 25 19:26 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-433152                                   | no-preload-433152            | jenkins | v1.35.0 | 10 May 25 19:26 UTC | 10 May 25 19:26 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-433152                                   | no-preload-433152            | jenkins | v1.35.0 | 10 May 25 19:26 UTC | 10 May 25 19:26 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-433152                                   | no-preload-433152            | jenkins | v1.35.0 | 10 May 25 19:26 UTC | 10 May 25 19:26 UTC |
	| delete  | -p no-preload-433152                                   | no-preload-433152            | jenkins | v1.35.0 | 10 May 25 19:26 UTC | 10 May 25 19:26 UTC |
	| image   | default-k8s-diff-port-544623                           | default-k8s-diff-port-544623 | jenkins | v1.35.0 | 10 May 25 19:26 UTC | 10 May 25 19:26 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-544623 | jenkins | v1.35.0 | 10 May 25 19:26 UTC | 10 May 25 19:26 UTC |
	|         | default-k8s-diff-port-544623                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-544623 | jenkins | v1.35.0 | 10 May 25 19:26 UTC | 10 May 25 19:26 UTC |
	|         | default-k8s-diff-port-544623                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-544623 | jenkins | v1.35.0 | 10 May 25 19:26 UTC | 10 May 25 19:26 UTC |
	|         | default-k8s-diff-port-544623                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-544623 | jenkins | v1.35.0 | 10 May 25 19:26 UTC | 10 May 25 19:26 UTC |
	|         | default-k8s-diff-port-544623                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-089147                              | old-k8s-version-089147       | jenkins | v1.35.0 | 10 May 25 19:27 UTC | 10 May 25 19:27 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-089147             | old-k8s-version-089147       | jenkins | v1.35.0 | 10 May 25 19:27 UTC | 10 May 25 19:27 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-089147                              | old-k8s-version-089147       | jenkins | v1.35.0 | 10 May 25 19:27 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-483140                 | embed-certs-483140           | jenkins | v1.35.0 | 10 May 25 19:27 UTC | 10 May 25 19:27 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-483140                                  | embed-certs-483140           | jenkins | v1.35.0 | 10 May 25 19:27 UTC | 10 May 25 19:28 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.0                           |                              |         |         |                     |                     |
	| image   | embed-certs-483140 image list                          | embed-certs-483140           | jenkins | v1.35.0 | 10 May 25 19:28 UTC | 10 May 25 19:28 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-483140                                  | embed-certs-483140           | jenkins | v1.35.0 | 10 May 25 19:28 UTC | 10 May 25 19:28 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-483140                                  | embed-certs-483140           | jenkins | v1.35.0 | 10 May 25 19:28 UTC | 10 May 25 19:28 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-483140                                  | embed-certs-483140           | jenkins | v1.35.0 | 10 May 25 19:28 UTC | 10 May 25 19:28 UTC |
	| delete  | -p embed-certs-483140                                  | embed-certs-483140           | jenkins | v1.35.0 | 10 May 25 19:28 UTC | 10 May 25 19:28 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/05/10 19:27:23
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0510 19:27:23.885144  459268 out.go:345] Setting OutFile to fd 1 ...
	I0510 19:27:23.885480  459268 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 19:27:23.885497  459268 out.go:358] Setting ErrFile to fd 2...
	I0510 19:27:23.885501  459268 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 19:27:23.885719  459268 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-388787/.minikube/bin
	I0510 19:27:23.886293  459268 out.go:352] Setting JSON to false
	I0510 19:27:23.887364  459268 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":32992,"bootTime":1746872252,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0510 19:27:23.887483  459268 start.go:140] virtualization: kvm guest
	I0510 19:27:23.889943  459268 out.go:177] * [embed-certs-483140] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0510 19:27:23.891957  459268 notify.go:220] Checking for updates...
	I0510 19:27:23.891994  459268 out.go:177]   - MINIKUBE_LOCATION=20720
	I0510 19:27:23.894190  459268 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0510 19:27:23.896124  459268 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20720-388787/kubeconfig
	I0510 19:27:23.897923  459268 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-388787/.minikube
	I0510 19:27:23.899523  459268 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0510 19:27:23.901199  459268 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0510 19:27:23.903392  459268 config.go:182] Loaded profile config "embed-certs-483140": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 19:27:23.904060  459268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:27:23.904180  459268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:27:23.920190  459268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45251
	I0510 19:27:23.920695  459268 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:27:23.921217  459268 main.go:141] libmachine: Using API Version  1
	I0510 19:27:23.921240  459268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:27:23.921569  459268 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:27:23.921756  459268 main.go:141] libmachine: (embed-certs-483140) Calling .DriverName
	I0510 19:27:23.922029  459268 driver.go:404] Setting default libvirt URI to qemu:///system
	I0510 19:27:23.922349  459268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:27:23.922417  459268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:27:23.938240  459268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41081
	I0510 19:27:23.938810  459268 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:27:23.939433  459268 main.go:141] libmachine: Using API Version  1
	I0510 19:27:23.939468  459268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:27:23.939903  459268 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:27:23.940145  459268 main.go:141] libmachine: (embed-certs-483140) Calling .DriverName
	I0510 19:27:23.978372  459268 out.go:177] * Using the kvm2 driver based on existing profile
	I0510 19:27:20.282773  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:20.283336  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | unable to find current IP address of domain old-k8s-version-089147 in network mk-old-k8s-version-089147
	I0510 19:27:20.283406  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | I0510 19:27:20.283343  459091 retry.go:31] will retry after 3.189593727s: waiting for domain to come up
	I0510 19:27:23.618741  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:23.619115  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | unable to find current IP address of domain old-k8s-version-089147 in network mk-old-k8s-version-089147
	I0510 19:27:23.619143  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | I0510 19:27:23.619075  459091 retry.go:31] will retry after 3.237680008s: waiting for domain to come up
	I0510 19:27:23.979818  459268 start.go:304] selected driver: kvm2
	I0510 19:27:23.979843  459268 start.go:908] validating driver "kvm2" against &{Name:embed-certs-483140 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:embed-certs-483140 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.231 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 19:27:23.979977  459268 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0510 19:27:23.980756  459268 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0510 19:27:23.980839  459268 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20720-388787/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0510 19:27:23.997236  459268 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0510 19:27:23.997883  459268 start_flags.go:975] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0510 19:27:23.997935  459268 cni.go:84] Creating CNI manager for ""
	I0510 19:27:23.998008  459268 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0510 19:27:23.998078  459268 start.go:347] cluster config:
	{Name:embed-certs-483140 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:embed-certs-483140 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.231 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 19:27:23.998238  459268 iso.go:125] acquiring lock: {Name:mk19640015999219180c6685480547adf0c02201 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0510 19:27:24.000161  459268 out.go:177] * Starting "embed-certs-483140" primary control-plane node in "embed-certs-483140" cluster
	I0510 19:27:24.001573  459268 preload.go:131] Checking if preload exists for k8s version v1.33.0 and runtime crio
	I0510 19:27:24.001646  459268 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-cri-o-overlay-amd64.tar.lz4
	I0510 19:27:24.001656  459268 cache.go:56] Caching tarball of preloaded images
	I0510 19:27:24.001770  459268 preload.go:172] Found /home/jenkins/minikube-integration/20720-388787/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0510 19:27:24.001787  459268 cache.go:59] Finished verifying existence of preloaded tar for v1.33.0 on crio
	I0510 19:27:24.001913  459268 profile.go:143] Saving config to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/embed-certs-483140/config.json ...
	I0510 19:27:24.002132  459268 start.go:360] acquireMachinesLock for embed-certs-483140: {Name:mk11499d7756d503a7a24339ad1a7f9ab9dc0fab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0510 19:27:28.400997  459268 start.go:364] duration metric: took 4.398817522s to acquireMachinesLock for "embed-certs-483140"
	I0510 19:27:28.401047  459268 start.go:96] Skipping create...Using existing machine configuration
	I0510 19:27:28.401054  459268 fix.go:54] fixHost starting: 
	I0510 19:27:28.401464  459268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:27:28.401519  459268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:27:28.419712  459268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44069
	I0510 19:27:28.420231  459268 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:27:28.420865  459268 main.go:141] libmachine: Using API Version  1
	I0510 19:27:28.420897  459268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:27:28.421274  459268 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:27:28.421549  459268 main.go:141] libmachine: (embed-certs-483140) Calling .DriverName
	I0510 19:27:28.421748  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetState
	I0510 19:27:28.423533  459268 fix.go:112] recreateIfNeeded on embed-certs-483140: state=Stopped err=<nil>
	I0510 19:27:28.423563  459268 main.go:141] libmachine: (embed-certs-483140) Calling .DriverName
	W0510 19:27:28.423744  459268 fix.go:138] unexpected machine state, will restart: <nil>
	I0510 19:27:28.425472  459268 out.go:177] * Restarting existing kvm2 VM for "embed-certs-483140" ...
	I0510 19:27:28.426613  459268 main.go:141] libmachine: (embed-certs-483140) Calling .Start
	I0510 19:27:28.426810  459268 main.go:141] libmachine: (embed-certs-483140) starting domain...
	I0510 19:27:28.426829  459268 main.go:141] libmachine: (embed-certs-483140) ensuring networks are active...
	I0510 19:27:28.427619  459268 main.go:141] libmachine: (embed-certs-483140) Ensuring network default is active
	I0510 19:27:28.428029  459268 main.go:141] libmachine: (embed-certs-483140) Ensuring network mk-embed-certs-483140 is active
	I0510 19:27:28.428436  459268 main.go:141] libmachine: (embed-certs-483140) getting domain XML...
	I0510 19:27:28.429330  459268 main.go:141] libmachine: (embed-certs-483140) creating domain...
	I0510 19:27:26.860579  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:26.861169  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has current primary IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:26.861235  459056 main.go:141] libmachine: (old-k8s-version-089147) found domain IP: 192.168.50.225
	I0510 19:27:26.861263  459056 main.go:141] libmachine: (old-k8s-version-089147) reserving static IP address...
	I0510 19:27:26.861678  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "old-k8s-version-089147", mac: "52:54:00:c5:c6:86", ip: "192.168.50.225"} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:26.861748  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | skip adding static IP to network mk-old-k8s-version-089147 - found existing host DHCP lease matching {name: "old-k8s-version-089147", mac: "52:54:00:c5:c6:86", ip: "192.168.50.225"}
	I0510 19:27:26.861769  459056 main.go:141] libmachine: (old-k8s-version-089147) reserved static IP address 192.168.50.225 for domain old-k8s-version-089147
	I0510 19:27:26.861785  459056 main.go:141] libmachine: (old-k8s-version-089147) waiting for SSH...
	I0510 19:27:26.861791  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | Getting to WaitForSSH function...
	I0510 19:27:26.863716  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:26.864074  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:26.864105  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:26.864224  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | Using SSH client type: external
	I0510 19:27:26.864249  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | Using SSH private key: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/old-k8s-version-089147/id_rsa (-rw-------)
	I0510 19:27:26.864275  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.225 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20720-388787/.minikube/machines/old-k8s-version-089147/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0510 19:27:26.864284  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | About to run SSH command:
	I0510 19:27:26.864292  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | exit 0
	I0510 19:27:26.992149  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | SSH cmd err, output: <nil>: 
	I0510 19:27:26.992596  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetConfigRaw
	I0510 19:27:26.993291  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetIP
	I0510 19:27:26.996245  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:26.996734  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:26.996760  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:26.996987  459056 profile.go:143] Saving config to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/config.json ...
	I0510 19:27:26.997231  459056 machine.go:93] provisionDockerMachine start ...
	I0510 19:27:26.997257  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .DriverName
	I0510 19:27:26.997484  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHHostname
	I0510 19:27:26.999968  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.000439  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:27.000476  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.000707  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHPort
	I0510 19:27:27.000924  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:27.001051  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:27.001195  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHUsername
	I0510 19:27:27.001309  459056 main.go:141] libmachine: Using SSH client type: native
	I0510 19:27:27.001588  459056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.50.225 22 <nil> <nil>}
	I0510 19:27:27.001603  459056 main.go:141] libmachine: About to run SSH command:
	hostname
	I0510 19:27:27.120348  459056 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0510 19:27:27.120385  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetMachineName
	I0510 19:27:27.120685  459056 buildroot.go:166] provisioning hostname "old-k8s-version-089147"
	I0510 19:27:27.120712  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetMachineName
	I0510 19:27:27.120937  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHHostname
	I0510 19:27:27.123906  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.124166  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:27.124192  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.124346  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHPort
	I0510 19:27:27.124515  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:27.124641  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:27.124770  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHUsername
	I0510 19:27:27.124903  459056 main.go:141] libmachine: Using SSH client type: native
	I0510 19:27:27.125130  459056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.50.225 22 <nil> <nil>}
	I0510 19:27:27.125146  459056 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-089147 && echo "old-k8s-version-089147" | sudo tee /etc/hostname
	I0510 19:27:27.254277  459056 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-089147
	
	I0510 19:27:27.254306  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHHostname
	I0510 19:27:27.257358  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.257763  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:27.257793  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.258010  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHPort
	I0510 19:27:27.258221  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:27.258392  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:27.258550  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHUsername
	I0510 19:27:27.258746  459056 main.go:141] libmachine: Using SSH client type: native
	I0510 19:27:27.258987  459056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.50.225 22 <nil> <nil>}
	I0510 19:27:27.259004  459056 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-089147' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-089147/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-089147' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0510 19:27:27.383141  459056 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0510 19:27:27.383177  459056 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20720-388787/.minikube CaCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20720-388787/.minikube}
	I0510 19:27:27.383245  459056 buildroot.go:174] setting up certificates
	I0510 19:27:27.383268  459056 provision.go:84] configureAuth start
	I0510 19:27:27.383282  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetMachineName
	I0510 19:27:27.383632  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetIP
	I0510 19:27:27.386412  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.386733  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:27.386760  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.386920  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHHostname
	I0510 19:27:27.388990  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.389308  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:27.389346  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.389489  459056 provision.go:143] copyHostCerts
	I0510 19:27:27.389586  459056 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem, removing ...
	I0510 19:27:27.389611  459056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem
	I0510 19:27:27.389674  459056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem (1675 bytes)
	I0510 19:27:27.389763  459056 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem, removing ...
	I0510 19:27:27.389771  459056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem
	I0510 19:27:27.389797  459056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem (1078 bytes)
	I0510 19:27:27.389845  459056 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem, removing ...
	I0510 19:27:27.389852  459056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem
	I0510 19:27:27.389873  459056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem (1123 bytes)
	I0510 19:27:27.389917  459056 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-089147 san=[127.0.0.1 192.168.50.225 localhost minikube old-k8s-version-089147]
	I0510 19:27:27.706220  459056 provision.go:177] copyRemoteCerts
	I0510 19:27:27.706291  459056 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0510 19:27:27.706321  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHHostname
	I0510 19:27:27.709279  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.709662  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:27.709704  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.709901  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHPort
	I0510 19:27:27.710147  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:27.710312  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHUsername
	I0510 19:27:27.710453  459056 sshutil.go:53] new ssh client: &{IP:192.168.50.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/old-k8s-version-089147/id_rsa Username:docker}
	I0510 19:27:27.796192  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0510 19:27:27.826223  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0510 19:27:27.856165  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0510 19:27:27.885803  459056 provision.go:87] duration metric: took 502.517549ms to configureAuth
	I0510 19:27:27.885844  459056 buildroot.go:189] setting minikube options for container-runtime
	I0510 19:27:27.886049  459056 config.go:182] Loaded profile config "old-k8s-version-089147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0510 19:27:27.886126  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHHostname
	I0510 19:27:27.888892  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.889274  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:27.889304  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:27.889432  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHPort
	I0510 19:27:27.889662  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:27.889842  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:27.890001  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHUsername
	I0510 19:27:27.890137  459056 main.go:141] libmachine: Using SSH client type: native
	I0510 19:27:27.890398  459056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.50.225 22 <nil> <nil>}
	I0510 19:27:27.890414  459056 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0510 19:27:28.145754  459056 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0510 19:27:28.145780  459056 machine.go:96] duration metric: took 1.148533327s to provisionDockerMachine
	I0510 19:27:28.145793  459056 start.go:293] postStartSetup for "old-k8s-version-089147" (driver="kvm2")
	I0510 19:27:28.145805  459056 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0510 19:27:28.145843  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .DriverName
	I0510 19:27:28.146213  459056 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0510 19:27:28.146241  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHHostname
	I0510 19:27:28.148935  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:28.149310  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:28.149338  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:28.149442  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHPort
	I0510 19:27:28.149630  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:28.149794  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHUsername
	I0510 19:27:28.149969  459056 sshutil.go:53] new ssh client: &{IP:192.168.50.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/old-k8s-version-089147/id_rsa Username:docker}
	I0510 19:27:28.237429  459056 ssh_runner.go:195] Run: cat /etc/os-release
	I0510 19:27:28.242504  459056 info.go:137] Remote host: Buildroot 2024.11.2
	I0510 19:27:28.242535  459056 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-388787/.minikube/addons for local assets ...
	I0510 19:27:28.242600  459056 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-388787/.minikube/files for local assets ...
	I0510 19:27:28.242694  459056 filesync.go:149] local asset: /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem -> 3959802.pem in /etc/ssl/certs
	I0510 19:27:28.242795  459056 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0510 19:27:28.255581  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem --> /etc/ssl/certs/3959802.pem (1708 bytes)
	I0510 19:27:28.285383  459056 start.go:296] duration metric: took 139.572888ms for postStartSetup
	I0510 19:27:28.285430  459056 fix.go:56] duration metric: took 19.171545731s for fixHost
	I0510 19:27:28.285452  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHHostname
	I0510 19:27:28.288861  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:28.289256  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:28.289288  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:28.289472  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHPort
	I0510 19:27:28.289747  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:28.289968  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:28.290122  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHUsername
	I0510 19:27:28.290275  459056 main.go:141] libmachine: Using SSH client type: native
	I0510 19:27:28.290504  459056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.50.225 22 <nil> <nil>}
	I0510 19:27:28.290514  459056 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0510 19:27:28.400790  459056 main.go:141] libmachine: SSH cmd err, output: <nil>: 1746905248.354737003
	
	I0510 19:27:28.400820  459056 fix.go:216] guest clock: 1746905248.354737003
	I0510 19:27:28.400830  459056 fix.go:229] Guest: 2025-05-10 19:27:28.354737003 +0000 UTC Remote: 2025-05-10 19:27:28.285433906 +0000 UTC m=+19.332417949 (delta=69.303097ms)
	I0510 19:27:28.400874  459056 fix.go:200] guest clock delta is within tolerance: 69.303097ms
	I0510 19:27:28.400901  459056 start.go:83] releasing machines lock for "old-k8s-version-089147", held for 19.287012994s
	I0510 19:27:28.400943  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .DriverName
	I0510 19:27:28.401246  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetIP
	I0510 19:27:28.404469  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:28.404985  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:28.405012  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:28.405227  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .DriverName
	I0510 19:27:28.405870  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .DriverName
	I0510 19:27:28.406067  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .DriverName
	I0510 19:27:28.406182  459056 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0510 19:27:28.406225  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHHostname
	I0510 19:27:28.406371  459056 ssh_runner.go:195] Run: cat /version.json
	I0510 19:27:28.406414  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHHostname
	I0510 19:27:28.409133  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:28.409451  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:28.409485  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:28.409508  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:28.409700  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHPort
	I0510 19:27:28.409895  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:28.409939  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:28.409971  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:28.410074  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHUsername
	I0510 19:27:28.410144  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHPort
	I0510 19:27:28.410238  459056 sshutil.go:53] new ssh client: &{IP:192.168.50.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/old-k8s-version-089147/id_rsa Username:docker}
	I0510 19:27:28.410313  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHKeyPath
	I0510 19:27:28.410431  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetSSHUsername
	I0510 19:27:28.410556  459056 sshutil.go:53] new ssh client: &{IP:192.168.50.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/old-k8s-version-089147/id_rsa Username:docker}
	I0510 19:27:28.522881  459056 ssh_runner.go:195] Run: systemctl --version
	I0510 19:27:28.529679  459056 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0510 19:27:28.679208  459056 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0510 19:27:28.686449  459056 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0510 19:27:28.686542  459056 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0510 19:27:28.706391  459056 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0510 19:27:28.706422  459056 start.go:495] detecting cgroup driver to use...
	I0510 19:27:28.706502  459056 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0510 19:27:28.725500  459056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0510 19:27:28.743141  459056 docker.go:225] disabling cri-docker service (if available) ...
	I0510 19:27:28.743218  459056 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0510 19:27:28.763489  459056 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0510 19:27:28.782362  459056 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0510 19:27:28.930849  459056 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0510 19:27:29.145684  459056 docker.go:241] disabling docker service ...
	I0510 19:27:29.145777  459056 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0510 19:27:29.162572  459056 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0510 19:27:29.177892  459056 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0510 19:27:29.337238  459056 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0510 19:27:29.498230  459056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0510 19:27:29.515221  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0510 19:27:29.539326  459056 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0510 19:27:29.539400  459056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:27:29.551931  459056 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0510 19:27:29.552027  459056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:27:29.563727  459056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:27:29.576495  459056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:27:29.589274  459056 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0510 19:27:29.602567  459056 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0510 19:27:29.613569  459056 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0510 19:27:29.613666  459056 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0510 19:27:29.631475  459056 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0510 19:27:29.646992  459056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 19:27:29.783415  459056 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0510 19:27:29.908799  459056 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0510 19:27:29.908871  459056 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0510 19:27:29.916611  459056 start.go:563] Will wait 60s for crictl version
	I0510 19:27:29.916678  459056 ssh_runner.go:195] Run: which crictl
	I0510 19:27:29.922342  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0510 19:27:29.970957  459056 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0510 19:27:29.971075  459056 ssh_runner.go:195] Run: crio --version
	I0510 19:27:30.013260  459056 ssh_runner.go:195] Run: crio --version
	I0510 19:27:30.045551  459056 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0510 19:27:29.772968  459268 main.go:141] libmachine: (embed-certs-483140) waiting for IP...
	I0510 19:27:29.773852  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:29.774282  459268 main.go:141] libmachine: (embed-certs-483140) DBG | unable to find current IP address of domain embed-certs-483140 in network mk-embed-certs-483140
	I0510 19:27:29.774439  459268 main.go:141] libmachine: (embed-certs-483140) DBG | I0510 19:27:29.774308  459321 retry.go:31] will retry after 290.306519ms: waiting for domain to come up
	I0510 19:27:30.066100  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:30.066611  459268 main.go:141] libmachine: (embed-certs-483140) DBG | unable to find current IP address of domain embed-certs-483140 in network mk-embed-certs-483140
	I0510 19:27:30.066646  459268 main.go:141] libmachine: (embed-certs-483140) DBG | I0510 19:27:30.066565  459321 retry.go:31] will retry after 275.607152ms: waiting for domain to come up
	I0510 19:27:30.344347  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:30.345208  459268 main.go:141] libmachine: (embed-certs-483140) DBG | unable to find current IP address of domain embed-certs-483140 in network mk-embed-certs-483140
	I0510 19:27:30.345242  459268 main.go:141] libmachine: (embed-certs-483140) DBG | I0510 19:27:30.345116  459321 retry.go:31] will retry after 431.583413ms: waiting for domain to come up
	I0510 19:27:30.779076  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:30.779843  459268 main.go:141] libmachine: (embed-certs-483140) DBG | unable to find current IP address of domain embed-certs-483140 in network mk-embed-certs-483140
	I0510 19:27:30.779882  459268 main.go:141] libmachine: (embed-certs-483140) DBG | I0510 19:27:30.779780  459321 retry.go:31] will retry after 472.118095ms: waiting for domain to come up
	I0510 19:27:31.253280  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:31.253935  459268 main.go:141] libmachine: (embed-certs-483140) DBG | unable to find current IP address of domain embed-certs-483140 in network mk-embed-certs-483140
	I0510 19:27:31.253963  459268 main.go:141] libmachine: (embed-certs-483140) DBG | I0510 19:27:31.253906  459321 retry.go:31] will retry after 565.053718ms: waiting for domain to come up
	I0510 19:27:31.820497  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:31.821065  459268 main.go:141] libmachine: (embed-certs-483140) DBG | unable to find current IP address of domain embed-certs-483140 in network mk-embed-certs-483140
	I0510 19:27:31.821097  459268 main.go:141] libmachine: (embed-certs-483140) DBG | I0510 19:27:31.821039  459321 retry.go:31] will retry after 714.111732ms: waiting for domain to come up
	I0510 19:27:32.536460  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:32.537050  459268 main.go:141] libmachine: (embed-certs-483140) DBG | unable to find current IP address of domain embed-certs-483140 in network mk-embed-certs-483140
	I0510 19:27:32.537080  459268 main.go:141] libmachine: (embed-certs-483140) DBG | I0510 19:27:32.537000  459321 retry.go:31] will retry after 1.161843323s: waiting for domain to come up
	I0510 19:27:33.701019  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:33.701583  459268 main.go:141] libmachine: (embed-certs-483140) DBG | unable to find current IP address of domain embed-certs-483140 in network mk-embed-certs-483140
	I0510 19:27:33.701613  459268 main.go:141] libmachine: (embed-certs-483140) DBG | I0510 19:27:33.701550  459321 retry.go:31] will retry after 996.121621ms: waiting for domain to come up
	I0510 19:27:30.046696  459056 main.go:141] libmachine: (old-k8s-version-089147) Calling .GetIP
	I0510 19:27:30.049916  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:30.050298  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:c6:86", ip: ""} in network mk-old-k8s-version-089147: {Iface:virbr2 ExpiryTime:2025-05-10 20:27:21 +0000 UTC Type:0 Mac:52:54:00:c5:c6:86 Iaid: IPaddr:192.168.50.225 Prefix:24 Hostname:old-k8s-version-089147 Clientid:01:52:54:00:c5:c6:86}
	I0510 19:27:30.050343  459056 main.go:141] libmachine: (old-k8s-version-089147) DBG | domain old-k8s-version-089147 has defined IP address 192.168.50.225 and MAC address 52:54:00:c5:c6:86 in network mk-old-k8s-version-089147
	I0510 19:27:30.050593  459056 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0510 19:27:30.055795  459056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0510 19:27:30.072862  459056 kubeadm.go:875] updating cluster {Name:old-k8s-version-089147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-089147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.225 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0510 19:27:30.073023  459056 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0510 19:27:30.073092  459056 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 19:27:30.136655  459056 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0510 19:27:30.136733  459056 ssh_runner.go:195] Run: which lz4
	I0510 19:27:30.141756  459056 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0510 19:27:30.146784  459056 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0510 19:27:30.146832  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0510 19:27:32.084982  459056 crio.go:462] duration metric: took 1.943253158s to copy over tarball
	I0510 19:27:32.085084  459056 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0510 19:27:34.700012  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:34.700655  459268 main.go:141] libmachine: (embed-certs-483140) DBG | unable to find current IP address of domain embed-certs-483140 in network mk-embed-certs-483140
	I0510 19:27:34.700709  459268 main.go:141] libmachine: (embed-certs-483140) DBG | I0510 19:27:34.700617  459321 retry.go:31] will retry after 1.33170267s: waiting for domain to come up
	I0510 19:27:36.033761  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:36.034412  459268 main.go:141] libmachine: (embed-certs-483140) DBG | unable to find current IP address of domain embed-certs-483140 in network mk-embed-certs-483140
	I0510 19:27:36.034447  459268 main.go:141] libmachine: (embed-certs-483140) DBG | I0510 19:27:36.034366  459321 retry.go:31] will retry after 2.129430607s: waiting for domain to come up
	I0510 19:27:38.166385  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:38.167048  459268 main.go:141] libmachine: (embed-certs-483140) DBG | unable to find current IP address of domain embed-certs-483140 in network mk-embed-certs-483140
	I0510 19:27:38.167074  459268 main.go:141] libmachine: (embed-certs-483140) DBG | I0510 19:27:38.167010  459321 retry.go:31] will retry after 1.898585133s: waiting for domain to come up
	I0510 19:27:34.680248  459056 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.595132142s)
	I0510 19:27:34.680275  459056 crio.go:469] duration metric: took 2.595258666s to extract the tarball
	I0510 19:27:34.680284  459056 ssh_runner.go:146] rm: /preloaded.tar.lz4
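	(The preload step above follows a simple shape: check whether /preloaded.tar.lz4 already exists on the node, copy it over from the local cache if not, extract it into /var for CRI-O, then delete the tarball. A rough Go sketch of that flow is below, using a local shell runner as a stand-in for minikube's SSH runner; the helper names are assumptions.)

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run is a hypothetical local stand-in for minikube's ssh_runner: it
	// executes a shell command and reports whether it succeeded.
	func run(cmd string) error {
		return exec.Command("/bin/bash", "-c", cmd).Run()
	}

	// loadPreload mirrors the flow in the log: if the preloaded tarball is not
	// already present, it would be copied from the local cache (elided here);
	// it is then extracted into /var for the container runtime and removed.
	func loadPreload(tarball string) error {
		if err := run(fmt.Sprintf("stat -c '%%s %%y' %s", tarball)); err != nil {
			// minikube scp's the cached preloaded-images tarball here; this
			// sketch simply assumes it is already on the node.
			return fmt.Errorf("preload %s not present: %w", tarball, err)
		}
		if err := run(fmt.Sprintf("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf %s", tarball)); err != nil {
			return fmt.Errorf("extracting %s: %w", tarball, err)
		}
		return run(fmt.Sprintf("sudo rm -f %s", tarball))
	}

	func main() {
		if err := loadPreload("/preloaded.tar.lz4"); err != nil {
			fmt.Println(err)
		}
	}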
	I0510 19:27:34.725856  459056 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 19:27:34.769530  459056 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0510 19:27:34.769567  459056 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0510 19:27:34.769639  459056 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0510 19:27:34.769682  459056 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0510 19:27:34.769696  459056 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0510 19:27:34.769712  459056 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0510 19:27:34.769686  459056 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0510 19:27:34.769766  459056 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0510 19:27:34.769779  459056 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0510 19:27:34.769798  459056 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0510 19:27:34.771393  459056 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0510 19:27:34.771413  459056 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0510 19:27:34.771433  459056 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0510 19:27:34.771391  459056 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0510 19:27:34.771454  459056 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0510 19:27:34.771457  459056 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0510 19:27:34.771488  459056 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0510 19:27:34.771522  459056 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0510 19:27:34.903898  459056 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0510 19:27:34.909532  459056 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0510 19:27:34.909958  459056 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0510 19:27:34.920714  459056 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0510 19:27:34.927038  459056 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0510 19:27:34.932543  459056 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0510 19:27:34.939391  459056 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0510 19:27:35.035164  459056 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0510 19:27:35.035225  459056 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0510 19:27:35.035308  459056 ssh_runner.go:195] Run: which crictl
	I0510 19:27:35.046705  459056 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0510 19:27:35.046773  459056 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0510 19:27:35.046831  459056 ssh_runner.go:195] Run: which crictl
	I0510 19:27:35.102600  459056 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0510 19:27:35.102657  459056 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0510 19:27:35.102728  459056 ssh_runner.go:195] Run: which crictl
	I0510 19:27:35.114127  459056 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0510 19:27:35.114197  459056 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0510 19:27:35.114220  459056 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0510 19:27:35.114255  459056 ssh_runner.go:195] Run: which crictl
	I0510 19:27:35.114262  459056 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0510 19:27:35.114305  459056 ssh_runner.go:195] Run: which crictl
	I0510 19:27:35.114526  459056 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0510 19:27:35.114562  459056 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0510 19:27:35.114596  459056 ssh_runner.go:195] Run: which crictl
	I0510 19:27:35.135454  459056 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0510 19:27:35.135500  459056 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0510 19:27:35.135549  459056 ssh_runner.go:195] Run: which crictl
	I0510 19:27:35.135570  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0510 19:27:35.135627  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0510 19:27:35.135673  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0510 19:27:35.135728  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0510 19:27:35.135753  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0510 19:27:35.135782  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0510 19:27:35.246929  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0510 19:27:35.246999  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0510 19:27:35.304129  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0510 19:27:35.304183  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0510 19:27:35.304193  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0510 19:27:35.304231  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0510 19:27:35.304278  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0510 19:27:35.381894  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0510 19:27:35.381939  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0510 19:27:35.482712  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0510 19:27:35.482788  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0510 19:27:35.482823  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0510 19:27:35.482858  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0510 19:27:35.482947  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0510 19:27:35.526146  459056 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0510 19:27:35.557215  459056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0510 19:27:35.649079  459056 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0510 19:27:35.649160  459056 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0510 19:27:35.649222  459056 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0510 19:27:35.649256  459056 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0510 19:27:35.649351  459056 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0510 19:27:35.667931  459056 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0510 19:27:35.671336  459056 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0510 19:27:35.818843  459056 cache_images.go:92] duration metric: took 1.049254698s to LoadCachedImages
	W0510 19:27:35.818925  459056 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20720-388787/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
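	(In the block above, each image required by Kubernetes v1.20.0 is probed with podman image inspect; anything missing is marked "needs transfer", removed via crictl, and then loaded from the local image cache, which fails in this run because the cached etcd tarball is absent. A simplified Go sketch of the "which images need transfer" decision follows; the function names are illustrative, not minikube's cache_images API.)

	package main

	import (
		"fmt"
		"os/exec"
	)

	// imageInRuntime is a rough stand-in for the `podman image inspect` probe
	// in the log: it reports whether the container runtime already has the image.
	func imageInRuntime(image string) bool {
		return exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run() == nil
	}

	// neededTransfers mirrors the decision above: every required image missing
	// from the runtime "needs transfer" from the local cache directory.
	func neededTransfers(required []string) []string {
		var missing []string
		for _, img := range required {
			if !imageInRuntime(img) {
				missing = append(missing, img)
			}
		}
		return missing
	}

	func main() {
		required := []string{
			"registry.k8s.io/kube-apiserver:v1.20.0",
			"registry.k8s.io/etcd:3.4.13-0",
			"registry.k8s.io/coredns:1.7.0",
			"gcr.io/k8s-minikube/storage-provisioner:v5",
		}
		for _, img := range neededTransfers(required) {
			// At this point minikube would load the cached image file for img.
			fmt.Printf("%q needs transfer\n", img)
		}
	}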
	I0510 19:27:35.818936  459056 kubeadm.go:926] updating node { 192.168.50.225 8443 v1.20.0 crio true true} ...
	I0510 19:27:35.819071  459056 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-089147 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.225
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-089147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0510 19:27:35.819178  459056 ssh_runner.go:195] Run: crio config
	I0510 19:27:35.871053  459056 cni.go:84] Creating CNI manager for ""
	I0510 19:27:35.871078  459056 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0510 19:27:35.871088  459056 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0510 19:27:35.871108  459056 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.225 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-089147 NodeName:old-k8s-version-089147 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.225"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.225 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0510 19:27:35.871325  459056 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.225
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-089147"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.225
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.225"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0510 19:27:35.871410  459056 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0510 19:27:35.884778  459056 binaries.go:44] Found k8s binaries, skipping transfer
	I0510 19:27:35.884850  459056 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0510 19:27:35.897755  459056 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0510 19:27:35.920392  459056 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0510 19:27:35.944066  459056 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0510 19:27:35.969513  459056 ssh_runner.go:195] Run: grep 192.168.50.225	control-plane.minikube.internal$ /etc/hosts
	I0510 19:27:35.973968  459056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.225	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0510 19:27:35.989113  459056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 19:27:36.126144  459056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0510 19:27:36.161368  459056 certs.go:68] Setting up /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147 for IP: 192.168.50.225
	I0510 19:27:36.161393  459056 certs.go:194] generating shared ca certs ...
	I0510 19:27:36.161414  459056 certs.go:226] acquiring lock for ca certs: {Name:mk8db74782205da4ac57ef815dd495cda255251a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 19:27:36.161602  459056 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20720-388787/.minikube/ca.key
	I0510 19:27:36.161660  459056 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.key
	I0510 19:27:36.161675  459056 certs.go:256] generating profile certs ...
	I0510 19:27:36.161815  459056 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/client.key
	I0510 19:27:36.161897  459056 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/apiserver.key.3362ca92
	I0510 19:27:36.161951  459056 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/proxy-client.key
	I0510 19:27:36.162093  459056 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/395980.pem (1338 bytes)
	W0510 19:27:36.162134  459056 certs.go:480] ignoring /home/jenkins/minikube-integration/20720-388787/.minikube/certs/395980_empty.pem, impossibly tiny 0 bytes
	I0510 19:27:36.162148  459056 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem (1679 bytes)
	I0510 19:27:36.162186  459056 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem (1078 bytes)
	I0510 19:27:36.162219  459056 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem (1123 bytes)
	I0510 19:27:36.162251  459056 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem (1675 bytes)
	I0510 19:27:36.162305  459056 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem (1708 bytes)
	I0510 19:27:36.163029  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0510 19:27:36.207434  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0510 19:27:36.254337  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0510 19:27:36.302029  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0510 19:27:36.340123  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0510 19:27:36.372457  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0510 19:27:36.417695  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0510 19:27:36.454687  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/old-k8s-version-089147/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0510 19:27:36.491453  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0510 19:27:36.527708  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/certs/395980.pem --> /usr/share/ca-certificates/395980.pem (1338 bytes)
	I0510 19:27:36.566188  459056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem --> /usr/share/ca-certificates/3959802.pem (1708 bytes)
	I0510 19:27:36.605695  459056 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0510 19:27:36.633416  459056 ssh_runner.go:195] Run: openssl version
	I0510 19:27:36.640812  459056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0510 19:27:36.655287  459056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0510 19:27:36.660996  459056 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 10 17:52 /usr/share/ca-certificates/minikubeCA.pem
	I0510 19:27:36.661078  459056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0510 19:27:36.671509  459056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0510 19:27:36.685341  459056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/395980.pem && ln -fs /usr/share/ca-certificates/395980.pem /etc/ssl/certs/395980.pem"
	I0510 19:27:36.701195  459056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/395980.pem
	I0510 19:27:36.707338  459056 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 10 18:00 /usr/share/ca-certificates/395980.pem
	I0510 19:27:36.707426  459056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/395980.pem
	I0510 19:27:36.715832  459056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/395980.pem /etc/ssl/certs/51391683.0"
	I0510 19:27:36.730499  459056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3959802.pem && ln -fs /usr/share/ca-certificates/3959802.pem /etc/ssl/certs/3959802.pem"
	I0510 19:27:36.745937  459056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3959802.pem
	I0510 19:27:36.753124  459056 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 10 18:00 /usr/share/ca-certificates/3959802.pem
	I0510 19:27:36.753219  459056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3959802.pem
	I0510 19:27:36.763162  459056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3959802.pem /etc/ssl/certs/3ec20f2e.0"
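	(The openssl/ln pairs above install each CA into the system trust store: openssl x509 -hash prints the certificate's subject hash, and /etc/ssl/certs/<hash>.0 is symlinked to the PEM so OpenSSL-based clients can find it. A small Go sketch of the same idea; installCACert is a hypothetical helper that shells out to the same openssl invocation shown in the log.)

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCACert computes the OpenSSL subject hash of a CA certificate and
	// points /etc/ssl/certs/<hash>.0 at it, the same effect as the
	// `openssl x509 -hash` + `ln -fs` pair in the log.
	func installCACert(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		// ln -fs equivalent: replace any existing link before creating it.
		_ = os.Remove(link)
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Println(err)
		}
	}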
	I0510 19:27:36.777980  459056 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0510 19:27:36.784377  459056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0510 19:27:36.792871  459056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0510 19:27:36.801028  459056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0510 19:27:36.809570  459056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0510 19:27:36.820430  459056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0510 19:27:36.830234  459056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
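	(The -checkend 86400 runs above ask whether each control-plane certificate expires within the next 24 hours; any that did would be regenerated before the cluster starts. An equivalent check in Go using crypto/x509 is sketched below; the path and helper name are illustrative.)

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within
	// d, the same question `openssl x509 -checkend 86400` answers for 24 hours.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}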
	I0510 19:27:36.838492  459056 kubeadm.go:392] StartCluster: {Name:old-k8s-version-089147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-089147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.225 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 19:27:36.838628  459056 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0510 19:27:36.838710  459056 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0510 19:27:36.883637  459056 cri.go:89] found id: ""
	I0510 19:27:36.883721  459056 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0510 19:27:36.898381  459056 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0510 19:27:36.898418  459056 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0510 19:27:36.898479  459056 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0510 19:27:36.911968  459056 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0510 19:27:36.912423  459056 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-089147" does not appear in /home/jenkins/minikube-integration/20720-388787/kubeconfig
	I0510 19:27:36.912622  459056 kubeconfig.go:62] /home/jenkins/minikube-integration/20720-388787/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-089147" cluster setting kubeconfig missing "old-k8s-version-089147" context setting]
	I0510 19:27:36.912933  459056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-388787/kubeconfig: {Name:mk5ad7285fe4c17b2779ea6d5a539f101fe94797 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 19:27:36.978461  459056 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0510 19:27:36.992010  459056 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.50.225
	I0510 19:27:36.992058  459056 kubeadm.go:1152] stopping kube-system containers ...
	I0510 19:27:36.992090  459056 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0510 19:27:36.992157  459056 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0510 19:27:37.036332  459056 cri.go:89] found id: ""
	I0510 19:27:37.036417  459056 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0510 19:27:37.061304  459056 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0510 19:27:37.077360  459056 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0510 19:27:37.077388  459056 kubeadm.go:157] found existing configuration files:
	
	I0510 19:27:37.077447  459056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0510 19:27:37.091136  459056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0510 19:27:37.091207  459056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0510 19:27:37.108190  459056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0510 19:27:37.122863  459056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0510 19:27:37.122925  459056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0510 19:27:37.135581  459056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0510 19:27:37.151096  459056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0510 19:27:37.151176  459056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0510 19:27:37.163976  459056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0510 19:27:37.176297  459056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0510 19:27:37.176382  459056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0510 19:27:37.189484  459056 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0510 19:27:37.202907  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0510 19:27:37.370636  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0510 19:27:38.101468  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0510 19:27:38.357025  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0510 19:27:38.472109  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0510 19:27:38.566036  459056 api_server.go:52] waiting for apiserver process to appear ...
	I0510 19:27:38.566163  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:40.067566  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:40.068079  459268 main.go:141] libmachine: (embed-certs-483140) DBG | unable to find current IP address of domain embed-certs-483140 in network mk-embed-certs-483140
	I0510 19:27:40.068151  459268 main.go:141] libmachine: (embed-certs-483140) DBG | I0510 19:27:40.068067  459321 retry.go:31] will retry after 3.236923309s: waiting for domain to come up
	I0510 19:27:43.308549  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:43.309080  459268 main.go:141] libmachine: (embed-certs-483140) DBG | unable to find current IP address of domain embed-certs-483140 in network mk-embed-certs-483140
	I0510 19:27:43.309112  459268 main.go:141] libmachine: (embed-certs-483140) DBG | I0510 19:27:43.309038  459321 retry.go:31] will retry after 2.981327362s: waiting for domain to come up
	I0510 19:27:39.066944  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:39.566854  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:40.067066  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:40.567198  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:41.066452  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:41.566381  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:42.066951  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:42.567170  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:43.067308  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:43.566541  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:46.293587  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:46.294125  459268 main.go:141] libmachine: (embed-certs-483140) DBG | unable to find current IP address of domain embed-certs-483140 in network mk-embed-certs-483140
	I0510 19:27:46.294169  459268 main.go:141] libmachine: (embed-certs-483140) DBG | I0510 19:27:46.294106  459321 retry.go:31] will retry after 3.49595936s: waiting for domain to come up
	I0510 19:27:44.067005  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:44.566869  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:45.066432  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:45.567107  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:46.066205  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:46.566600  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:47.066806  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:47.567316  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:48.067123  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:48.566636  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
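	(The block of pgrep commands above is a wait loop: roughly every 500ms the node is asked whether a kube-apiserver process matching the minikube pattern exists yet. A compact Go sketch of that probe follows; waitForAPIServerProcess is a hypothetical name, the pgrep invocation is the one shown in the log.)

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServerProcess polls pgrep until a kube-apiserver process for
	// this cluster shows up or the timeout expires.
	func waitForAPIServerProcess(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("kube-apiserver process did not appear within %v", timeout)
	}

	func main() {
		if err := waitForAPIServerProcess(3 * time.Second); err != nil {
			fmt.Println(err)
		}
	}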
	I0510 19:27:49.792274  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:49.792796  459268 main.go:141] libmachine: (embed-certs-483140) found domain IP: 192.168.72.231
	I0510 19:27:49.792820  459268 main.go:141] libmachine: (embed-certs-483140) reserving static IP address...
	I0510 19:27:49.792830  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has current primary IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:49.793260  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "embed-certs-483140", mac: "52:54:00:2c:f8:9f", ip: "192.168.72.231"} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:27:49.793283  459268 main.go:141] libmachine: (embed-certs-483140) reserved static IP address 192.168.72.231 for domain embed-certs-483140
	I0510 19:27:49.793301  459268 main.go:141] libmachine: (embed-certs-483140) DBG | skip adding static IP to network mk-embed-certs-483140 - found existing host DHCP lease matching {name: "embed-certs-483140", mac: "52:54:00:2c:f8:9f", ip: "192.168.72.231"}
	I0510 19:27:49.793315  459268 main.go:141] libmachine: (embed-certs-483140) DBG | Getting to WaitForSSH function...
	I0510 19:27:49.793330  459268 main.go:141] libmachine: (embed-certs-483140) waiting for SSH...
	I0510 19:27:49.795680  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:49.796092  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:27:49.796115  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:49.796237  459268 main.go:141] libmachine: (embed-certs-483140) DBG | Using SSH client type: external
	I0510 19:27:49.796292  459268 main.go:141] libmachine: (embed-certs-483140) DBG | Using SSH private key: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/embed-certs-483140/id_rsa (-rw-------)
	I0510 19:27:49.796323  459268 main.go:141] libmachine: (embed-certs-483140) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.231 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20720-388787/.minikube/machines/embed-certs-483140/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0510 19:27:49.796357  459268 main.go:141] libmachine: (embed-certs-483140) DBG | About to run SSH command:
	I0510 19:27:49.796369  459268 main.go:141] libmachine: (embed-certs-483140) DBG | exit 0
	I0510 19:27:49.923834  459268 main.go:141] libmachine: (embed-certs-483140) DBG | SSH cmd err, output: <nil>: 
	I0510 19:27:49.924265  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetConfigRaw
	I0510 19:27:49.924904  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetIP
	I0510 19:27:49.928115  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:49.928557  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:27:49.928589  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:49.928844  459268 profile.go:143] Saving config to /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/embed-certs-483140/config.json ...
	I0510 19:27:49.929086  459268 machine.go:93] provisionDockerMachine start ...
	I0510 19:27:49.929120  459268 main.go:141] libmachine: (embed-certs-483140) Calling .DriverName
	I0510 19:27:49.929435  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHHostname
	I0510 19:27:49.931867  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:49.932242  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:27:49.932278  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:49.932387  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHPort
	I0510 19:27:49.932602  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:27:49.932748  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:27:49.932878  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHUsername
	I0510 19:27:49.933115  459268 main.go:141] libmachine: Using SSH client type: native
	I0510 19:27:49.933388  459268 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.72.231 22 <nil> <nil>}
	I0510 19:27:49.933401  459268 main.go:141] libmachine: About to run SSH command:
	hostname
	I0510 19:27:50.044168  459268 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0510 19:27:50.044204  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetMachineName
	I0510 19:27:50.044481  459268 buildroot.go:166] provisioning hostname "embed-certs-483140"
	I0510 19:27:50.044509  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetMachineName
	I0510 19:27:50.044693  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHHostname
	I0510 19:27:50.047840  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:50.048210  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:27:50.048232  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:50.048417  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHPort
	I0510 19:27:50.048632  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:27:50.048790  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:27:50.048942  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHUsername
	I0510 19:27:50.049085  459268 main.go:141] libmachine: Using SSH client type: native
	I0510 19:27:50.049295  459268 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.72.231 22 <nil> <nil>}
	I0510 19:27:50.049308  459268 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-483140 && echo "embed-certs-483140" | sudo tee /etc/hostname
	I0510 19:27:50.174048  459268 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-483140
	
	I0510 19:27:50.174083  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHHostname
	I0510 19:27:50.177045  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:50.177447  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:27:50.177480  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:50.177653  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHPort
	I0510 19:27:50.177869  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:27:50.178002  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:27:50.178154  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHUsername
	I0510 19:27:50.178284  459268 main.go:141] libmachine: Using SSH client type: native
	I0510 19:27:50.178498  459268 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.72.231 22 <nil> <nil>}
	I0510 19:27:50.178514  459268 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-483140' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-483140/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-483140' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0510 19:27:50.298589  459268 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0510 19:27:50.298629  459268 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20720-388787/.minikube CaCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20720-388787/.minikube}
	I0510 19:27:50.298678  459268 buildroot.go:174] setting up certificates
	I0510 19:27:50.298688  459268 provision.go:84] configureAuth start
	I0510 19:27:50.298698  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetMachineName
	I0510 19:27:50.299119  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetIP
	I0510 19:27:50.301907  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:50.302237  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:27:50.302256  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:50.302394  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHHostname
	I0510 19:27:50.305191  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:50.305523  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:27:50.305545  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:50.305718  459268 provision.go:143] copyHostCerts
	I0510 19:27:50.305792  459268 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem, removing ...
	I0510 19:27:50.305807  459268 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem
	I0510 19:27:50.305860  459268 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/ca.pem (1078 bytes)
	I0510 19:27:50.305962  459268 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem, removing ...
	I0510 19:27:50.305970  459268 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem
	I0510 19:27:50.306000  459268 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/cert.pem (1123 bytes)
	I0510 19:27:50.306073  459268 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem, removing ...
	I0510 19:27:50.306087  459268 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem
	I0510 19:27:50.306105  459268 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20720-388787/.minikube/key.pem (1675 bytes)
	I0510 19:27:50.306169  459268 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem org=jenkins.embed-certs-483140 san=[127.0.0.1 192.168.72.231 embed-certs-483140 localhost minikube]
	I0510 19:27:50.615586  459268 provision.go:177] copyRemoteCerts
	I0510 19:27:50.615663  459268 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0510 19:27:50.615691  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHHostname
	I0510 19:27:50.618693  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:50.619094  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:27:50.619124  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:50.619296  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHPort
	I0510 19:27:50.619467  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:27:50.619613  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHUsername
	I0510 19:27:50.619728  459268 sshutil.go:53] new ssh client: &{IP:192.168.72.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/embed-certs-483140/id_rsa Username:docker}
	I0510 19:27:50.709319  459268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0510 19:27:50.739864  459268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0510 19:27:50.769743  459268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0510 19:27:50.799032  459268 provision.go:87] duration metric: took 500.330996ms to configureAuth
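configureAuth regenerated server.pem with the SANs listed above (127.0.0.1, 192.168.72.231, embed-certs-483140, localhost, minikube) and copied it to /etc/docker on the guest. A minimal sketch for inspecting those SANs on the host, using the path from this run:
	# print the Subject Alternative Name extension of the freshly generated server cert
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/20720-388787/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'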
	I0510 19:27:50.799064  459268 buildroot.go:189] setting minikube options for container-runtime
	I0510 19:27:50.799354  459268 config.go:182] Loaded profile config "embed-certs-483140": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 19:27:50.799434  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHHostname
	I0510 19:27:50.802338  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:50.802753  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:27:50.802796  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:50.802915  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHPort
	I0510 19:27:50.803096  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:27:50.803296  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:27:50.803423  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHUsername
	I0510 19:27:50.803591  459268 main.go:141] libmachine: Using SSH client type: native
	I0510 19:27:50.803807  459268 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.72.231 22 <nil> <nil>}
	I0510 19:27:50.803830  459268 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0510 19:27:51.055936  459268 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0510 19:27:51.055969  459268 machine.go:96] duration metric: took 1.126866865s to provisionDockerMachine
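The provisioner wrote CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarted CRI-O. A quick, hedged way to confirm the drop-in landed and the service is up (profile name from this run):
	# show the drop-in and the crio service state inside the guest
	minikube -p embed-certs-483140 ssh "cat /etc/sysconfig/crio.minikube && systemctl is-active crio"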
	I0510 19:27:51.055989  459268 start.go:293] postStartSetup for "embed-certs-483140" (driver="kvm2")
	I0510 19:27:51.056002  459268 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0510 19:27:51.056026  459268 main.go:141] libmachine: (embed-certs-483140) Calling .DriverName
	I0510 19:27:51.056453  459268 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0510 19:27:51.056494  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHHostname
	I0510 19:27:51.059782  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:51.060458  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:27:51.060503  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:51.060671  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHPort
	I0510 19:27:51.061017  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:27:51.061277  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHUsername
	I0510 19:27:51.061481  459268 sshutil.go:53] new ssh client: &{IP:192.168.72.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/embed-certs-483140/id_rsa Username:docker}
	I0510 19:27:51.153337  459268 ssh_runner.go:195] Run: cat /etc/os-release
	I0510 19:27:51.158738  459268 info.go:137] Remote host: Buildroot 2024.11.2
	I0510 19:27:51.158782  459268 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-388787/.minikube/addons for local assets ...
	I0510 19:27:51.158876  459268 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-388787/.minikube/files for local assets ...
	I0510 19:27:51.158982  459268 filesync.go:149] local asset: /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem -> 3959802.pem in /etc/ssl/certs
	I0510 19:27:51.159078  459268 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0510 19:27:51.171765  459268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem --> /etc/ssl/certs/3959802.pem (1708 bytes)
	I0510 19:27:51.204973  459268 start.go:296] duration metric: took 148.937348ms for postStartSetup
	I0510 19:27:51.205024  459268 fix.go:56] duration metric: took 22.803970548s for fixHost
	I0510 19:27:51.205051  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHHostname
	I0510 19:27:51.208258  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:51.208723  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:27:51.208748  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:51.208995  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHPort
	I0510 19:27:51.209219  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:27:51.209421  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:27:51.209566  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHUsername
	I0510 19:27:51.209735  459268 main.go:141] libmachine: Using SSH client type: native
	I0510 19:27:51.209940  459268 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.72.231 22 <nil> <nil>}
	I0510 19:27:51.209947  459268 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0510 19:27:51.320755  459268 main.go:141] libmachine: SSH cmd err, output: <nil>: 1746905271.291613089
	
	I0510 19:27:51.320787  459268 fix.go:216] guest clock: 1746905271.291613089
	I0510 19:27:51.320798  459268 fix.go:229] Guest: 2025-05-10 19:27:51.291613089 +0000 UTC Remote: 2025-05-10 19:27:51.20502902 +0000 UTC m=+27.360293338 (delta=86.584069ms)
	I0510 19:27:51.320828  459268 fix.go:200] guest clock delta is within tolerance: 86.584069ms
	I0510 19:27:51.320835  459268 start.go:83] releasing machines lock for "embed-certs-483140", held for 22.919808938s
	I0510 19:27:51.320863  459268 main.go:141] libmachine: (embed-certs-483140) Calling .DriverName
	I0510 19:27:51.321154  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetIP
	I0510 19:27:51.324081  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:51.324459  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:27:51.324483  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:51.324692  459268 main.go:141] libmachine: (embed-certs-483140) Calling .DriverName
	I0510 19:27:51.325214  459268 main.go:141] libmachine: (embed-certs-483140) Calling .DriverName
	I0510 19:27:51.325408  459268 main.go:141] libmachine: (embed-certs-483140) Calling .DriverName
	I0510 19:27:51.325548  459268 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0510 19:27:51.325594  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHHostname
	I0510 19:27:51.325646  459268 ssh_runner.go:195] Run: cat /version.json
	I0510 19:27:51.325681  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHHostname
	I0510 19:27:51.328440  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:51.328753  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:27:51.328794  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:51.328818  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:51.329002  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHPort
	I0510 19:27:51.329194  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:27:51.329232  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:27:51.329255  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:51.329376  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHUsername
	I0510 19:27:51.329402  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHPort
	I0510 19:27:51.329568  459268 sshutil.go:53] new ssh client: &{IP:192.168.72.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/embed-certs-483140/id_rsa Username:docker}
	I0510 19:27:51.329584  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:27:51.329733  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHUsername
	I0510 19:27:51.329873  459268 sshutil.go:53] new ssh client: &{IP:192.168.72.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/embed-certs-483140/id_rsa Username:docker}
	I0510 19:27:51.446190  459268 ssh_runner.go:195] Run: systemctl --version
	I0510 19:27:51.452760  459268 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0510 19:27:51.607666  459268 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0510 19:27:51.616239  459268 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0510 19:27:51.616317  459268 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0510 19:27:51.636571  459268 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0510 19:27:51.636605  459268 start.go:495] detecting cgroup driver to use...
	I0510 19:27:51.636667  459268 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0510 19:27:51.657444  459268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0510 19:27:51.676562  459268 docker.go:225] disabling cri-docker service (if available) ...
	I0510 19:27:51.676630  459268 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0510 19:27:51.694731  459268 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0510 19:27:51.712216  459268 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0510 19:27:51.876386  459268 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0510 19:27:52.020882  459268 docker.go:241] disabling docker service ...
	I0510 19:27:52.020959  459268 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0510 19:27:52.037031  459268 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0510 19:27:52.051939  459268 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0510 19:27:52.242011  459268 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0510 19:27:52.396595  459268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
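At this point cri-docker and docker have been stopped and masked so that CRI-O is the only runtime answering on the node. A hedged sketch to confirm both units stay out of the way (is-enabled reports "masked" and exits non-zero, hence the trailing || true):
	# masked units print "masked" here rather than "enabled"
	minikube -p embed-certs-483140 ssh "systemctl is-enabled docker.service cri-docker.service || true"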
	I0510 19:27:52.412573  459268 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0510 19:27:52.436314  459268 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0510 19:27:52.436382  459268 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:27:52.448707  459268 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0510 19:27:52.448775  459268 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:27:52.460614  459268 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:27:52.472822  459268 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:27:52.484913  459268 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0510 19:27:52.497971  459268 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:27:52.511526  459268 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:27:52.533115  459268 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 19:27:52.545947  459268 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0510 19:27:52.556778  459268 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0510 19:27:52.556857  459268 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0510 19:27:52.573550  459268 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0510 19:27:52.589299  459268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 19:27:52.732786  459268 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0510 19:27:52.860039  459268 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0510 19:27:52.860135  459268 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0510 19:27:52.865273  459268 start.go:563] Will wait 60s for crictl version
	I0510 19:27:52.865329  459268 ssh_runner.go:195] Run: which crictl
	I0510 19:27:52.869469  459268 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0510 19:27:52.910450  459268 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0510 19:27:52.910548  459268 ssh_runner.go:195] Run: crio --version
	I0510 19:27:52.940082  459268 ssh_runner.go:195] Run: crio --version
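The sed edits above pin the pause image to registry.k8s.io/pause:3.10 and switch CRI-O to the cgroupfs cgroup manager in /etc/crio/crio.conf.d/02-crio.conf, and the crictl/crio version probes confirm the restarted runtime is reachable. A hedged one-liner to read the rewritten values back:
	# confirm the pause image and cgroup settings that the sed commands wrote
	minikube -p embed-certs-483140 ssh \
	  "grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf"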
	I0510 19:27:52.972063  459268 out.go:177] * Preparing Kubernetes v1.33.0 on CRI-O 1.29.1 ...
	I0510 19:27:52.973307  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetIP
	I0510 19:27:52.976415  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:52.976789  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:27:52.976816  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:27:52.977066  459268 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0510 19:27:52.981433  459268 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0510 19:27:52.995881  459268 kubeadm.go:875] updating cluster {Name:embed-certs-483140 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.33.0 ClusterName:embed-certs-483140 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.231 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNode
Requested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0510 19:27:52.995991  459268 preload.go:131] Checking if preload exists for k8s version v1.33.0 and runtime crio
	I0510 19:27:52.996030  459268 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 19:27:53.034258  459268 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.33.0". assuming images are not preloaded.
	I0510 19:27:53.034325  459268 ssh_runner.go:195] Run: which lz4
	I0510 19:27:53.038628  459268 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0510 19:27:53.043283  459268 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0510 19:27:53.043322  459268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (413217622 bytes)
	I0510 19:27:49.067037  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:49.566942  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:50.066669  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:50.566620  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:51.066533  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:51.567303  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:52.066558  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:52.567193  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:53.066234  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:53.567160  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:54.704270  459268 crio.go:462] duration metric: took 1.665684843s to copy over tarball
	I0510 19:27:54.704390  459268 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0510 19:27:56.898604  459268 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.19418195s)
	I0510 19:27:56.898641  459268 crio.go:469] duration metric: took 2.194331535s to extract the tarball
	I0510 19:27:56.898653  459268 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0510 19:27:56.939194  459268 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 19:27:56.988274  459268 crio.go:514] all images are preloaded for cri-o runtime.
	I0510 19:27:56.988305  459268 cache_images.go:84] Images are preloaded, skipping loading
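The preload tarball was unpacked into /var, after which crictl reports every image needed for v1.33.0 as present. As a rough sanity check (sketch only), the same listing can be queried by hand:
	# list one of the preloaded control-plane images inside the guest
	minikube -p embed-certs-483140 ssh "sudo crictl images | grep kube-apiserver"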
	I0510 19:27:56.988315  459268 kubeadm.go:926] updating node { 192.168.72.231 8443 v1.33.0 crio true true} ...
	I0510 19:27:56.988421  459268 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.33.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-483140 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.231
	
	[Install]
	 config:
	{KubernetesVersion:v1.33.0 ClusterName:embed-certs-483140 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0510 19:27:56.988518  459268 ssh_runner.go:195] Run: crio config
	I0510 19:27:57.044585  459268 cni.go:84] Creating CNI manager for ""
	I0510 19:27:57.044616  459268 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0510 19:27:57.044632  459268 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0510 19:27:57.044674  459268 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.231 APIServerPort:8443 KubernetesVersion:v1.33.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-483140 NodeName:embed-certs-483140 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.231"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.231 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0510 19:27:57.044833  459268 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.231
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-483140"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.231"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.231"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.33.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0510 19:27:57.044929  459268 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.33.0
	I0510 19:27:57.057883  459268 binaries.go:44] Found k8s binaries, skipping transfer
	I0510 19:27:57.057964  459268 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0510 19:27:57.070669  459268 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0510 19:27:57.096191  459268 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0510 19:27:57.120219  459268 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
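The rendered kubeadm config shown above is copied to /var/tmp/minikube/kubeadm.yaml.new on the node before the restart phases run. Assuming the bundled kubeadm binary and its `config validate` subcommand (available in recent releases), the file can be checked by hand with:
	# validate the generated kubeadm config with the same kubeadm binary minikube uses
	minikube -p embed-certs-483140 ssh \
	  "sudo /var/lib/minikube/binaries/v1.33.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new"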
	I0510 19:27:57.143282  459268 ssh_runner.go:195] Run: grep 192.168.72.231	control-plane.minikube.internal$ /etc/hosts
	I0510 19:27:57.148049  459268 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.231	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0510 19:27:57.164188  459268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 19:27:57.307271  459268 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0510 19:27:57.342355  459268 certs.go:68] Setting up /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/embed-certs-483140 for IP: 192.168.72.231
	I0510 19:27:57.342381  459268 certs.go:194] generating shared ca certs ...
	I0510 19:27:57.342405  459268 certs.go:226] acquiring lock for ca certs: {Name:mk8db74782205da4ac57ef815dd495cda255251a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 19:27:57.342591  459268 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20720-388787/.minikube/ca.key
	I0510 19:27:57.342680  459268 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.key
	I0510 19:27:57.342697  459268 certs.go:256] generating profile certs ...
	I0510 19:27:57.342827  459268 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/embed-certs-483140/client.key
	I0510 19:27:57.342886  459268 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/embed-certs-483140/apiserver.key.027a75a8
	I0510 19:27:57.342922  459268 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/embed-certs-483140/proxy-client.key
	I0510 19:27:57.343035  459268 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/395980.pem (1338 bytes)
	W0510 19:27:57.343078  459268 certs.go:480] ignoring /home/jenkins/minikube-integration/20720-388787/.minikube/certs/395980_empty.pem, impossibly tiny 0 bytes
	I0510 19:27:57.343092  459268 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca-key.pem (1679 bytes)
	I0510 19:27:57.343124  459268 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/ca.pem (1078 bytes)
	I0510 19:27:57.343154  459268 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/cert.pem (1123 bytes)
	I0510 19:27:57.343196  459268 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/certs/key.pem (1675 bytes)
	I0510 19:27:57.343281  459268 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem (1708 bytes)
	I0510 19:27:57.343973  459268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0510 19:27:57.378887  459268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0510 19:27:57.420451  459268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0510 19:27:57.457206  459268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0510 19:27:57.499641  459268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/embed-certs-483140/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0510 19:27:57.534055  459268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/embed-certs-483140/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0510 19:27:57.564979  459268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/embed-certs-483140/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0510 19:27:57.601743  459268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/embed-certs-483140/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0510 19:27:57.633117  459268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/certs/395980.pem --> /usr/share/ca-certificates/395980.pem (1338 bytes)
	I0510 19:27:57.664410  459268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/ssl/certs/3959802.pem --> /usr/share/ca-certificates/3959802.pem (1708 bytes)
	I0510 19:27:57.693525  459268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-388787/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0510 19:27:57.723750  459268 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0510 19:27:57.745486  459268 ssh_runner.go:195] Run: openssl version
	I0510 19:27:57.752288  459268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/395980.pem && ln -fs /usr/share/ca-certificates/395980.pem /etc/ssl/certs/395980.pem"
	I0510 19:27:57.766087  459268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/395980.pem
	I0510 19:27:57.771459  459268 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 10 18:00 /usr/share/ca-certificates/395980.pem
	I0510 19:27:57.771521  459268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/395980.pem
	I0510 19:27:57.778642  459268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/395980.pem /etc/ssl/certs/51391683.0"
	I0510 19:27:57.792251  459268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3959802.pem && ln -fs /usr/share/ca-certificates/3959802.pem /etc/ssl/certs/3959802.pem"
	I0510 19:27:57.806097  459268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3959802.pem
	I0510 19:27:57.811543  459268 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 10 18:00 /usr/share/ca-certificates/3959802.pem
	I0510 19:27:57.811613  459268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3959802.pem
	I0510 19:27:57.818894  459268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3959802.pem /etc/ssl/certs/3ec20f2e.0"
	I0510 19:27:57.833637  459268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0510 19:27:57.848084  459268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0510 19:27:57.853506  459268 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 10 17:52 /usr/share/ca-certificates/minikubeCA.pem
	I0510 19:27:57.853569  459268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0510 19:27:57.861284  459268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0510 19:27:57.875248  459268 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0510 19:27:57.881000  459268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0510 19:27:57.889239  459268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0510 19:27:57.898408  459268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0510 19:27:57.907154  459268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0510 19:27:57.915654  459268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0510 19:27:57.924501  459268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
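Each of the openssl runs above uses -checkend 86400, which exits non-zero only if the certificate will expire within the next 86400 seconds (24 h), so a zero exit means the cert is good for at least another day. A standalone example against one of the certs copied earlier:
	# exit status 0 => apiserver.crt is valid for at least 24 more hours
	minikube -p embed-certs-483140 ssh \
	  "sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver.crt && echo still-valid"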
	I0510 19:27:57.932344  459268 kubeadm.go:392] StartCluster: {Name:embed-certs-483140 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33
.0 ClusterName:embed-certs-483140 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.231 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeReq
uested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 19:27:57.932450  459268 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0510 19:27:57.932515  459268 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0510 19:27:57.977038  459268 cri.go:89] found id: ""
	I0510 19:27:57.977121  459268 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0510 19:27:57.988821  459268 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0510 19:27:57.988856  459268 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0510 19:27:57.988917  459268 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0510 19:27:58.000862  459268 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0510 19:27:58.001626  459268 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-483140" does not appear in /home/jenkins/minikube-integration/20720-388787/kubeconfig
	I0510 19:27:58.001911  459268 kubeconfig.go:62] /home/jenkins/minikube-integration/20720-388787/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-483140" cluster setting kubeconfig missing "embed-certs-483140" context setting]
	I0510 19:27:58.002463  459268 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-388787/kubeconfig: {Name:mk5ad7285fe4c17b2779ea6d5a539f101fe94797 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 19:27:58.012994  459268 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0510 19:27:58.026138  459268 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.72.231
	I0510 19:27:58.026178  459268 kubeadm.go:1152] stopping kube-system containers ...
	I0510 19:27:58.026192  459268 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0510 19:27:58.026251  459268 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0510 19:27:58.069294  459268 cri.go:89] found id: ""
	I0510 19:27:58.069376  459268 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0510 19:27:58.089295  459268 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0510 19:27:58.101786  459268 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0510 19:27:58.101807  459268 kubeadm.go:157] found existing configuration files:
	
	I0510 19:27:58.101851  459268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0510 19:27:58.112987  459268 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0510 19:27:58.113053  459268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0510 19:27:58.125239  459268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0510 19:27:58.137764  459268 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0510 19:27:58.137828  459268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0510 19:27:58.150429  459268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0510 19:27:58.163051  459268 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0510 19:27:58.163137  459268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0510 19:27:58.175159  459268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0510 19:27:58.186717  459268 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0510 19:27:58.186792  459268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
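The cleanup above probes each generated kubeconfig for the expected control-plane endpoint and removes the ones that are stale; here all four files are simply absent, so everything is regenerated by the following init phases. A hedged way to repeat the probe by hand once new files have been written:
	# list which kubeconfigs on the node already point at the control-plane endpoint
	minikube -p embed-certs-483140 ssh \
	  "sudo grep -l 'control-plane.minikube.internal:8443' /etc/kubernetes/*.conf 2>/dev/null || echo none-yet"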
	I0510 19:27:58.200405  459268 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0510 19:27:58.214273  459268 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0510 19:27:58.343615  459268 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0510 19:27:54.066832  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:54.567225  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:55.067095  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:55.567141  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:56.066981  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:56.566711  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:57.066205  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:57.566404  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:58.067102  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:58.566428  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:59.367696  459268 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.024040496s)
	I0510 19:27:59.367731  459268 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0510 19:27:59.640666  459268 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0510 19:27:59.716214  459268 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0510 19:27:59.797846  459268 api_server.go:52] waiting for apiserver process to appear ...
	I0510 19:27:59.797921  459268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:00.298404  459268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:00.798112  459268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:00.834727  459268 api_server.go:72] duration metric: took 1.036892245s to wait for apiserver process to appear ...
	I0510 19:28:00.834760  459268 api_server.go:88] waiting for apiserver healthz status ...
	I0510 19:28:00.834784  459268 api_server.go:253] Checking apiserver healthz at https://192.168.72.231:8443/healthz ...
	I0510 19:28:00.835339  459268 api_server.go:269] stopped: https://192.168.72.231:8443/healthz: Get "https://192.168.72.231:8443/healthz": dial tcp 192.168.72.231:8443: connect: connection refused
	I0510 19:28:01.334998  459268 api_server.go:253] Checking apiserver healthz at https://192.168.72.231:8443/healthz ...
	I0510 19:27:59.066475  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:27:59.567069  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:00.066988  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:00.566888  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:01.066769  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:01.566741  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:02.066555  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:02.566338  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:03.066492  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:03.567302  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:03.904035  459268 api_server.go:279] https://192.168.72.231:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0510 19:28:03.904079  459268 api_server.go:103] status: https://192.168.72.231:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0510 19:28:03.904097  459268 api_server.go:253] Checking apiserver healthz at https://192.168.72.231:8443/healthz ...
	I0510 19:28:03.956072  459268 api_server.go:279] https://192.168.72.231:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0510 19:28:03.956108  459268 api_server.go:103] status: https://192.168.72.231:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0510 19:28:04.335740  459268 api_server.go:253] Checking apiserver healthz at https://192.168.72.231:8443/healthz ...
	I0510 19:28:04.341381  459268 api_server.go:279] https://192.168.72.231:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0510 19:28:04.341410  459268 api_server.go:103] status: https://192.168.72.231:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0510 19:28:04.835035  459268 api_server.go:253] Checking apiserver healthz at https://192.168.72.231:8443/healthz ...
	I0510 19:28:04.843795  459268 api_server.go:279] https://192.168.72.231:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0510 19:28:04.843856  459268 api_server.go:103] status: https://192.168.72.231:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0510 19:28:05.335582  459268 api_server.go:253] Checking apiserver healthz at https://192.168.72.231:8443/healthz ...
	I0510 19:28:05.340256  459268 api_server.go:279] https://192.168.72.231:8443/healthz returned 200:
	ok
	I0510 19:28:05.348062  459268 api_server.go:141] control plane version: v1.33.0
	I0510 19:28:05.348092  459268 api_server.go:131] duration metric: took 4.513324632s to wait for apiserver health ...
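A note on the healthz polling captured above: the api_server.go lines probe https://192.168.72.231:8443/healthz and keep waiting through "connection refused", 403 (anonymous user) and 500 (post-start hooks still failing) responses until a plain 200 "ok" arrives. The Go sketch below reproduces that pattern for illustration only; the function name pollHealthz, the 500ms retry interval and the hard-coded timeout are assumptions, not minikube's actual implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz keeps probing a kube-apiserver /healthz endpoint until it
// returns 200, treating connection errors, 403 and 500 as "not ready yet".
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The apiserver serves a self-signed certificate during bring-up,
		// so an anonymous probe skips verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			// e.g. "connect: connection refused" while the apiserver restarts
			time.Sleep(500 * time.Millisecond)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			return nil // healthz reported "ok"
		}
		// 403 (forbidden for system:anonymous) and 500 (some post-start
		// hooks not finished) both mean "keep waiting".
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := pollHealthz("https://192.168.72.231:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}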
	I0510 19:28:05.348102  459268 cni.go:84] Creating CNI manager for ""
	I0510 19:28:05.348108  459268 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0510 19:28:05.349901  459268 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0510 19:28:05.351199  459268 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0510 19:28:05.369532  459268 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0510 19:28:05.403896  459268 system_pods.go:43] waiting for kube-system pods to appear ...
	I0510 19:28:05.410420  459268 system_pods.go:59] 8 kube-system pods found
	I0510 19:28:05.410466  459268 system_pods.go:61] "coredns-674b8bbfcf-4ld9c" [2af71141-c2b9-4788-8dcf-19ae78077d83] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0510 19:28:05.410476  459268 system_pods.go:61] "etcd-embed-certs-483140" [18335556-d523-4f93-9975-36c6ec710b8e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0510 19:28:05.410484  459268 system_pods.go:61] "kube-apiserver-embed-certs-483140" [ccfb56df-98d8-49bd-af84-4897349b90fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0510 19:28:05.410489  459268 system_pods.go:61] "kube-controller-manager-embed-certs-483140" [3aa74b28-d50d-4a50-b222-38dea567ed3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0510 19:28:05.410494  459268 system_pods.go:61] "kube-proxy-b2gvg" [d17e7a7f-57d3-4fe4-ace9-7a2fc70bb585] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0510 19:28:05.410500  459268 system_pods.go:61] "kube-scheduler-embed-certs-483140" [1eb4348b-46a3-45d6-bd78-d5d9045b600c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0510 19:28:05.410505  459268 system_pods.go:61] "metrics-server-f79f97bbb-dbl7q" [b17e1431-b05d-4d16-8f92-46b9526e09fe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0510 19:28:05.410510  459268 system_pods.go:61] "storage-provisioner" [e9b8f9e8-8add-47f3-a9a7-51fae3a958d5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0510 19:28:05.410519  459268 system_pods.go:74] duration metric: took 6.592608ms to wait for pod list to return data ...
	I0510 19:28:05.410530  459268 node_conditions.go:102] verifying NodePressure condition ...
	I0510 19:28:05.415787  459268 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0510 19:28:05.415827  459268 node_conditions.go:123] node cpu capacity is 2
	I0510 19:28:05.415843  459268 node_conditions.go:105] duration metric: took 5.307579ms to run NodePressure ...
	I0510 19:28:05.415868  459268 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0510 19:28:05.791590  459268 kubeadm.go:720] waiting for restarted kubelet to initialise ...
	I0510 19:28:05.795260  459268 kubeadm.go:735] kubelet initialised
	I0510 19:28:05.795284  459268 kubeadm.go:736] duration metric: took 3.665992ms waiting for restarted kubelet to initialise ...
	I0510 19:28:05.795305  459268 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0510 19:28:05.811911  459268 ops.go:34] apiserver oom_adj: -16
	I0510 19:28:05.811945  459268 kubeadm.go:593] duration metric: took 7.823080185s to restartPrimaryControlPlane
	I0510 19:28:05.811959  459268 kubeadm.go:394] duration metric: took 7.879628572s to StartCluster
	I0510 19:28:05.811982  459268 settings.go:142] acquiring lock: {Name:mk4ab6a112c947bfdedd8044017a7c560266fb5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 19:28:05.812070  459268 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20720-388787/kubeconfig
	I0510 19:28:05.813672  459268 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-388787/kubeconfig: {Name:mk5ad7285fe4c17b2779ea6d5a539f101fe94797 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 19:28:05.814006  459268 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.231 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0510 19:28:05.814204  459268 config.go:182] Loaded profile config "embed-certs-483140": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 19:28:05.814159  459268 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0510 19:28:05.814258  459268 addons.go:69] Setting default-storageclass=true in profile "embed-certs-483140"
	I0510 19:28:05.814274  459268 addons.go:69] Setting dashboard=true in profile "embed-certs-483140"
	I0510 19:28:05.814258  459268 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-483140"
	I0510 19:28:05.814294  459268 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-483140"
	I0510 19:28:05.814286  459268 addons.go:69] Setting metrics-server=true in profile "embed-certs-483140"
	W0510 19:28:05.814306  459268 addons.go:247] addon storage-provisioner should already be in state true
	I0510 19:28:05.814315  459268 addons.go:238] Setting addon metrics-server=true in "embed-certs-483140"
	W0510 19:28:05.814323  459268 addons.go:247] addon metrics-server should already be in state true
	I0510 19:28:05.814336  459268 host.go:66] Checking if "embed-certs-483140" exists ...
	I0510 19:28:05.814357  459268 host.go:66] Checking if "embed-certs-483140" exists ...
	I0510 19:28:05.814279  459268 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-483140"
	I0510 19:28:05.814296  459268 addons.go:238] Setting addon dashboard=true in "embed-certs-483140"
	W0510 19:28:05.814480  459268 addons.go:247] addon dashboard should already be in state true
	I0510 19:28:05.814522  459268 host.go:66] Checking if "embed-certs-483140" exists ...
	I0510 19:28:05.814752  459268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:28:05.814784  459268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:28:05.814801  459268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:28:05.814812  459268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:28:05.814858  459268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:28:05.814903  459268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:28:05.814860  459268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:28:05.815049  459268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:28:05.815493  459268 out.go:177] * Verifying Kubernetes components...
	I0510 19:28:05.816761  459268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 19:28:05.832190  459268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34921
	I0510 19:28:05.833019  459268 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:28:05.833618  459268 main.go:141] libmachine: Using API Version  1
	I0510 19:28:05.833652  459268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:28:05.834069  459268 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:28:05.834652  459268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:28:05.834698  459268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:28:05.835356  459268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39063
	I0510 19:28:05.835412  459268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36825
	I0510 19:28:05.835824  459268 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:28:05.835909  459268 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:28:05.836388  459268 main.go:141] libmachine: Using API Version  1
	I0510 19:28:05.836411  459268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:28:05.836524  459268 main.go:141] libmachine: Using API Version  1
	I0510 19:28:05.836544  459268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:28:05.836805  459268 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:28:05.836925  459268 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:28:05.837086  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetState
	I0510 19:28:05.837502  459268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:28:05.837542  459268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:28:05.837861  459268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45851
	I0510 19:28:05.838446  459268 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:28:05.838949  459268 main.go:141] libmachine: Using API Version  1
	I0510 19:28:05.838974  459268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:28:05.839356  459268 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:28:05.840781  459268 addons.go:238] Setting addon default-storageclass=true in "embed-certs-483140"
	W0510 19:28:05.840809  459268 addons.go:247] addon default-storageclass should already be in state true
	I0510 19:28:05.840843  459268 host.go:66] Checking if "embed-certs-483140" exists ...
	I0510 19:28:05.841225  459268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:28:05.841283  459268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:28:05.841904  459268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:28:05.841957  459268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:28:05.855806  459268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38611
	I0510 19:28:05.856498  459268 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:28:05.857301  459268 main.go:141] libmachine: Using API Version  1
	I0510 19:28:05.857333  459268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:28:05.857754  459268 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:28:05.857831  459268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39121
	I0510 19:28:05.857977  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetState
	I0510 19:28:05.858290  459268 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:28:05.858779  459268 main.go:141] libmachine: Using API Version  1
	I0510 19:28:05.858803  459268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:28:05.858874  459268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38033
	I0510 19:28:05.859327  459268 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:28:05.859538  459268 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:28:05.859968  459268 main.go:141] libmachine: Using API Version  1
	I0510 19:28:05.859992  459268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:28:05.860232  459268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:28:05.860241  459268 main.go:141] libmachine: (embed-certs-483140) Calling .DriverName
	I0510 19:28:05.860273  459268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:28:05.860355  459268 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:28:05.860496  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetState
	I0510 19:28:05.862204  459268 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0510 19:28:05.862302  459268 main.go:141] libmachine: (embed-certs-483140) Calling .DriverName
	I0510 19:28:05.863409  459268 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0510 19:28:05.863496  459268 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0510 19:28:05.863512  459268 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0510 19:28:05.863528  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHHostname
	I0510 19:28:05.864433  459268 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0510 19:28:05.864458  459268 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0510 19:28:05.864480  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHHostname
	I0510 19:28:05.867368  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:28:05.867845  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:28:05.867993  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:28:05.868025  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:28:05.868296  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHPort
	I0510 19:28:05.868504  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:28:05.868556  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:28:05.868574  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:28:05.868691  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHUsername
	I0510 19:28:05.868814  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHPort
	I0510 19:28:05.868850  459268 sshutil.go:53] new ssh client: &{IP:192.168.72.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/embed-certs-483140/id_rsa Username:docker}
	I0510 19:28:05.868996  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:28:05.869204  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHUsername
	I0510 19:28:05.869389  459268 sshutil.go:53] new ssh client: &{IP:192.168.72.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/embed-certs-483140/id_rsa Username:docker}
	I0510 19:28:05.883698  459268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46855
	I0510 19:28:05.884370  459268 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:28:05.884927  459268 main.go:141] libmachine: Using API Version  1
	I0510 19:28:05.884961  459268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:28:05.885393  459268 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:28:05.885620  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetState
	I0510 19:28:05.887679  459268 main.go:141] libmachine: (embed-certs-483140) Calling .DriverName
	I0510 19:28:05.889699  459268 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0510 19:28:05.889946  459268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35865
	I0510 19:28:05.890351  459268 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:28:05.890843  459268 main.go:141] libmachine: Using API Version  1
	I0510 19:28:05.890898  459268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:28:05.891281  459268 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:28:05.891485  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetState
	I0510 19:28:05.891961  459268 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0510 19:28:05.893147  459268 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0510 19:28:05.893168  459268 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0510 19:28:05.893173  459268 main.go:141] libmachine: (embed-certs-483140) Calling .DriverName
	I0510 19:28:05.893192  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHHostname
	I0510 19:28:05.893397  459268 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0510 19:28:05.893412  459268 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0510 19:28:05.893429  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHHostname
	I0510 19:28:05.897062  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:28:05.897408  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:28:05.897473  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:28:05.897574  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:28:05.897702  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHPort
	I0510 19:28:05.897846  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:28:05.897995  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHUsername
	I0510 19:28:05.898008  459268 main.go:141] libmachine: (embed-certs-483140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:f8:9f", ip: ""} in network mk-embed-certs-483140: {Iface:virbr4 ExpiryTime:2025-05-10 20:27:41 +0000 UTC Type:0 Mac:52:54:00:2c:f8:9f Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:embed-certs-483140 Clientid:01:52:54:00:2c:f8:9f}
	I0510 19:28:05.898040  459268 main.go:141] libmachine: (embed-certs-483140) DBG | domain embed-certs-483140 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:f8:9f in network mk-embed-certs-483140
	I0510 19:28:05.898173  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHPort
	I0510 19:28:05.898163  459268 sshutil.go:53] new ssh client: &{IP:192.168.72.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/embed-certs-483140/id_rsa Username:docker}
	I0510 19:28:05.898334  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHKeyPath
	I0510 19:28:05.898489  459268 main.go:141] libmachine: (embed-certs-483140) Calling .GetSSHUsername
	I0510 19:28:05.898590  459268 sshutil.go:53] new ssh client: &{IP:192.168.72.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/embed-certs-483140/id_rsa Username:docker}
	I0510 19:28:06.110607  459268 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0510 19:28:06.144859  459268 node_ready.go:35] waiting up to 6m0s for node "embed-certs-483140" to be "Ready" ...
	I0510 19:28:06.150324  459268 node_ready.go:49] node "embed-certs-483140" is "Ready"
	I0510 19:28:06.150351  459268 node_ready.go:38] duration metric: took 5.421565ms for node "embed-certs-483140" to be "Ready" ...
	I0510 19:28:06.150364  459268 api_server.go:52] waiting for apiserver process to appear ...
	I0510 19:28:06.150417  459268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:06.172762  459268 api_server.go:72] duration metric: took 358.714749ms to wait for apiserver process to appear ...
	I0510 19:28:06.172794  459268 api_server.go:88] waiting for apiserver healthz status ...
	I0510 19:28:06.172815  459268 api_server.go:253] Checking apiserver healthz at https://192.168.72.231:8443/healthz ...
	I0510 19:28:06.181737  459268 api_server.go:279] https://192.168.72.231:8443/healthz returned 200:
	ok
	I0510 19:28:06.183824  459268 api_server.go:141] control plane version: v1.33.0
	I0510 19:28:06.183848  459268 api_server.go:131] duration metric: took 11.047783ms to wait for apiserver health ...
	I0510 19:28:06.183857  459268 system_pods.go:43] waiting for kube-system pods to appear ...
	I0510 19:28:06.188111  459268 system_pods.go:59] 8 kube-system pods found
	I0510 19:28:06.188145  459268 system_pods.go:61] "coredns-674b8bbfcf-4ld9c" [2af71141-c2b9-4788-8dcf-19ae78077d83] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0510 19:28:06.188156  459268 system_pods.go:61] "etcd-embed-certs-483140" [18335556-d523-4f93-9975-36c6ec710b8e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0510 19:28:06.188168  459268 system_pods.go:61] "kube-apiserver-embed-certs-483140" [ccfb56df-98d8-49bd-af84-4897349b90fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0510 19:28:06.188177  459268 system_pods.go:61] "kube-controller-manager-embed-certs-483140" [3aa74b28-d50d-4a50-b222-38dea567ed3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0510 19:28:06.188184  459268 system_pods.go:61] "kube-proxy-b2gvg" [d17e7a7f-57d3-4fe4-ace9-7a2fc70bb585] Running
	I0510 19:28:06.188195  459268 system_pods.go:61] "kube-scheduler-embed-certs-483140" [1eb4348b-46a3-45d6-bd78-d5d9045b600c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0510 19:28:06.188214  459268 system_pods.go:61] "metrics-server-f79f97bbb-dbl7q" [b17e1431-b05d-4d16-8f92-46b9526e09fe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0510 19:28:06.188220  459268 system_pods.go:61] "storage-provisioner" [e9b8f9e8-8add-47f3-a9a7-51fae3a958d5] Running
	I0510 19:28:06.188231  459268 system_pods.go:74] duration metric: took 4.368046ms to wait for pod list to return data ...
	I0510 19:28:06.188242  459268 default_sa.go:34] waiting for default service account to be created ...
	I0510 19:28:06.193811  459268 default_sa.go:45] found service account: "default"
	I0510 19:28:06.193846  459268 default_sa.go:55] duration metric: took 5.591706ms for default service account to be created ...
	I0510 19:28:06.193860  459268 system_pods.go:116] waiting for k8s-apps to be running ...
	I0510 19:28:06.200177  459268 system_pods.go:86] 8 kube-system pods found
	I0510 19:28:06.200220  459268 system_pods.go:89] "coredns-674b8bbfcf-4ld9c" [2af71141-c2b9-4788-8dcf-19ae78077d83] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0510 19:28:06.200233  459268 system_pods.go:89] "etcd-embed-certs-483140" [18335556-d523-4f93-9975-36c6ec710b8e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0510 19:28:06.200244  459268 system_pods.go:89] "kube-apiserver-embed-certs-483140" [ccfb56df-98d8-49bd-af84-4897349b90fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0510 19:28:06.200254  459268 system_pods.go:89] "kube-controller-manager-embed-certs-483140" [3aa74b28-d50d-4a50-b222-38dea567ed3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0510 19:28:06.200260  459268 system_pods.go:89] "kube-proxy-b2gvg" [d17e7a7f-57d3-4fe4-ace9-7a2fc70bb585] Running
	I0510 19:28:06.200268  459268 system_pods.go:89] "kube-scheduler-embed-certs-483140" [1eb4348b-46a3-45d6-bd78-d5d9045b600c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0510 19:28:06.200276  459268 system_pods.go:89] "metrics-server-f79f97bbb-dbl7q" [b17e1431-b05d-4d16-8f92-46b9526e09fe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0510 19:28:06.200282  459268 system_pods.go:89] "storage-provisioner" [e9b8f9e8-8add-47f3-a9a7-51fae3a958d5] Running
	I0510 19:28:06.200291  459268 system_pods.go:126] duration metric: took 6.423763ms to wait for k8s-apps to be running ...
	I0510 19:28:06.200300  459268 system_svc.go:44] waiting for kubelet service to be running ....
	I0510 19:28:06.200370  459268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0510 19:28:06.223314  459268 system_svc.go:56] duration metric: took 22.998023ms WaitForService to wait for kubelet
	I0510 19:28:06.223354  459268 kubeadm.go:578] duration metric: took 409.308651ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0510 19:28:06.223387  459268 node_conditions.go:102] verifying NodePressure condition ...
	I0510 19:28:06.232818  459268 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0510 19:28:06.232856  459268 node_conditions.go:123] node cpu capacity is 2
	I0510 19:28:06.232872  459268 node_conditions.go:105] duration metric: took 9.479043ms to run NodePressure ...
	I0510 19:28:06.232902  459268 start.go:241] waiting for startup goroutines ...
	I0510 19:28:06.266649  459268 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0510 19:28:06.266685  459268 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0510 19:28:06.302650  459268 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0510 19:28:06.334925  459268 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0510 19:28:06.334968  459268 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0510 19:28:06.361227  459268 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0510 19:28:06.415256  459268 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0510 19:28:06.415296  459268 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0510 19:28:06.419004  459268 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0510 19:28:06.419036  459268 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0510 19:28:06.550056  459268 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0510 19:28:06.550095  459268 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0510 19:28:06.551403  459268 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0510 19:28:06.551436  459268 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0510 19:28:06.652695  459268 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0510 19:28:06.652723  459268 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0510 19:28:06.732300  459268 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0510 19:28:06.732329  459268 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0510 19:28:06.812826  459268 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0510 19:28:06.812859  459268 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0510 19:28:06.814831  459268 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0510 19:28:06.941859  459268 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0510 19:28:06.941910  459268 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0510 19:28:07.112650  459268 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0510 19:28:07.112683  459268 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0510 19:28:07.230569  459268 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0510 19:28:07.230606  459268 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0510 19:28:07.348026  459268 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0510 19:28:08.311112  459268 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.008411221s)
	I0510 19:28:08.311190  459268 main.go:141] libmachine: Making call to close driver server
	I0510 19:28:08.311196  459268 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.949932076s)
	I0510 19:28:08.311207  459268 main.go:141] libmachine: (embed-certs-483140) Calling .Close
	I0510 19:28:08.311253  459268 main.go:141] libmachine: Making call to close driver server
	I0510 19:28:08.311374  459268 main.go:141] libmachine: (embed-certs-483140) Calling .Close
	I0510 19:28:08.311588  459268 main.go:141] libmachine: Successfully made call to close driver server
	I0510 19:28:08.311605  459268 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 19:28:08.311650  459268 main.go:141] libmachine: (embed-certs-483140) DBG | Closing plugin on server side
	I0510 19:28:08.311673  459268 main.go:141] libmachine: Successfully made call to close driver server
	I0510 19:28:08.311684  459268 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 19:28:08.311686  459268 main.go:141] libmachine: (embed-certs-483140) DBG | Closing plugin on server side
	I0510 19:28:08.311693  459268 main.go:141] libmachine: Making call to close driver server
	I0510 19:28:08.311701  459268 main.go:141] libmachine: (embed-certs-483140) Calling .Close
	I0510 19:28:08.311749  459268 main.go:141] libmachine: Making call to close driver server
	I0510 19:28:08.311769  459268 main.go:141] libmachine: (embed-certs-483140) Calling .Close
	I0510 19:28:08.311934  459268 main.go:141] libmachine: Successfully made call to close driver server
	I0510 19:28:08.311961  459268 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 19:28:08.313225  459268 main.go:141] libmachine: (embed-certs-483140) DBG | Closing plugin on server side
	I0510 19:28:08.313491  459268 main.go:141] libmachine: Successfully made call to close driver server
	I0510 19:28:08.313506  459268 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 19:28:08.331318  459268 main.go:141] libmachine: Making call to close driver server
	I0510 19:28:08.331353  459268 main.go:141] libmachine: (embed-certs-483140) Calling .Close
	I0510 19:28:08.331610  459268 main.go:141] libmachine: (embed-certs-483140) DBG | Closing plugin on server side
	I0510 19:28:08.331656  459268 main.go:141] libmachine: Successfully made call to close driver server
	I0510 19:28:08.331664  459268 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 19:28:08.561201  459268 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.746324825s)
	I0510 19:28:08.561271  459268 main.go:141] libmachine: Making call to close driver server
	I0510 19:28:08.561285  459268 main.go:141] libmachine: (embed-certs-483140) Calling .Close
	I0510 19:28:08.561649  459268 main.go:141] libmachine: Successfully made call to close driver server
	I0510 19:28:08.561672  459268 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 19:28:08.561690  459268 main.go:141] libmachine: Making call to close driver server
	I0510 19:28:08.561698  459268 main.go:141] libmachine: (embed-certs-483140) Calling .Close
	I0510 19:28:08.562030  459268 main.go:141] libmachine: (embed-certs-483140) DBG | Closing plugin on server side
	I0510 19:28:08.562077  459268 main.go:141] libmachine: Successfully made call to close driver server
	I0510 19:28:08.562088  459268 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 19:28:08.562103  459268 addons.go:479] Verifying addon metrics-server=true in "embed-certs-483140"
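The addon manifests above are applied on the guest by running kubectl with KUBECONFIG pointed at /var/lib/minikube/kubeconfig and one -f flag per file. The sketch below mirrors that invocation as a local command for illustration; minikube actually runs it over SSH inside the VM with sudo, and the paths shown are taken from the log lines above.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyAddonManifests runs `kubectl apply -f <m1> -f <m2> ...` with the
// given kubeconfig, mirroring the command seen in the log (sketch only).
func applyAddonManifests(kubectlPath, kubeconfig string, manifests []string) ([]byte, error) {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command(kubectlPath, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	return cmd.CombinedOutput()
}

func main() {
	out, err := applyAddonManifests(
		"/var/lib/minikube/binaries/v1.33.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		[]string{
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		},
	)
	fmt.Println(string(out))
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}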
	I0510 19:28:04.066752  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:04.567029  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:05.066242  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:05.567101  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:06.066378  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:06.566985  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:07.066671  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:07.566514  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:08.067086  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:08.566885  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:09.320104  459268 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.972016021s)
	I0510 19:28:09.320180  459268 main.go:141] libmachine: Making call to close driver server
	I0510 19:28:09.320206  459268 main.go:141] libmachine: (embed-certs-483140) Calling .Close
	I0510 19:28:09.320585  459268 main.go:141] libmachine: (embed-certs-483140) DBG | Closing plugin on server side
	I0510 19:28:09.320633  459268 main.go:141] libmachine: Successfully made call to close driver server
	I0510 19:28:09.320643  459268 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 19:28:09.320652  459268 main.go:141] libmachine: Making call to close driver server
	I0510 19:28:09.320660  459268 main.go:141] libmachine: (embed-certs-483140) Calling .Close
	I0510 19:28:09.320941  459268 main.go:141] libmachine: (embed-certs-483140) DBG | Closing plugin on server side
	I0510 19:28:09.320962  459268 main.go:141] libmachine: Successfully made call to close driver server
	I0510 19:28:09.320975  459268 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 19:28:09.323341  459268 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-483140 addons enable metrics-server
	
	I0510 19:28:09.324636  459268 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0510 19:28:09.325664  459268 addons.go:514] duration metric: took 3.511519103s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0510 19:28:09.325722  459268 start.go:246] waiting for cluster config update ...
	I0510 19:28:09.325741  459268 start.go:255] writing updated cluster config ...
	I0510 19:28:09.326092  459268 ssh_runner.go:195] Run: rm -f paused
	I0510 19:28:09.344642  459268 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0510 19:28:09.354144  459268 pod_ready.go:83] waiting for pod "coredns-674b8bbfcf-4ld9c" in "kube-system" namespace to be "Ready" or be gone ...
	W0510 19:28:11.360637  459268 pod_ready.go:104] pod "coredns-674b8bbfcf-4ld9c" is not "Ready", error: <nil>
	W0510 19:28:13.860282  459268 pod_ready.go:104] pod "coredns-674b8bbfcf-4ld9c" is not "Ready", error: <nil>
	I0510 19:28:09.066763  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:09.566992  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:10.066908  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:10.566843  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:11.066514  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:11.566388  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:12.066218  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:12.566934  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:13.066645  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:13.567085  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0510 19:28:15.860630  459268 pod_ready.go:104] pod "coredns-674b8bbfcf-4ld9c" is not "Ready", error: <nil>
	I0510 19:28:17.393207  459268 pod_ready.go:94] pod "coredns-674b8bbfcf-4ld9c" is "Ready"
	I0510 19:28:17.393237  459268 pod_ready.go:86] duration metric: took 8.039060776s for pod "coredns-674b8bbfcf-4ld9c" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:28:17.418993  459268 pod_ready.go:83] waiting for pod "etcd-embed-certs-483140" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:28:17.429049  459268 pod_ready.go:94] pod "etcd-embed-certs-483140" is "Ready"
	I0510 19:28:17.429081  459268 pod_ready.go:86] duration metric: took 10.055799ms for pod "etcd-embed-certs-483140" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:28:17.432083  459268 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-483140" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:28:17.437554  459268 pod_ready.go:94] pod "kube-apiserver-embed-certs-483140" is "Ready"
	I0510 19:28:17.437591  459268 pod_ready.go:86] duration metric: took 5.476778ms for pod "kube-apiserver-embed-certs-483140" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:28:17.440334  459268 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-483140" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:28:17.557594  459268 pod_ready.go:94] pod "kube-controller-manager-embed-certs-483140" is "Ready"
	I0510 19:28:17.557622  459268 pod_ready.go:86] duration metric: took 117.264734ms for pod "kube-controller-manager-embed-certs-483140" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:28:17.769743  459268 pod_ready.go:83] waiting for pod "kube-proxy-b2gvg" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:28:18.158013  459268 pod_ready.go:94] pod "kube-proxy-b2gvg" is "Ready"
	I0510 19:28:18.158042  459268 pod_ready.go:86] duration metric: took 388.270745ms for pod "kube-proxy-b2gvg" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:28:18.379133  459268 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-483140" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:28:18.758017  459268 pod_ready.go:94] pod "kube-scheduler-embed-certs-483140" is "Ready"
	I0510 19:28:18.758052  459268 pod_ready.go:86] duration metric: took 378.881401ms for pod "kube-scheduler-embed-certs-483140" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 19:28:18.758067  459268 pod_ready.go:40] duration metric: took 9.413376926s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0510 19:28:18.804476  459268 start.go:607] kubectl: 1.33.0, cluster: 1.33.0 (minor skew: 0)
	I0510 19:28:18.807325  459268 out.go:177] * Done! kubectl is now configured to use "embed-certs-483140" cluster and "default" namespace by default
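The pod_ready.go lines above wait for each kube-system pod's PodReady condition to become True, re-checking every couple of seconds. A minimal client-go sketch of that check follows, using the kubeconfig path and pod name that appear in the log; it is an illustrative stand-in, not minikube's code, and the 2-second interval is an assumption based on the timestamps above.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20720-388787/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-674b8bbfcf-4ld9c", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second) // the log shows roughly this spacing between checks
	}
}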
	I0510 19:28:14.066994  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:14.567064  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:15.066411  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:15.567220  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:16.067320  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:16.566859  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:17.066625  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:17.566521  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:18.066671  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:18.566592  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:19.066253  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:19.566860  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:20.066367  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:20.567118  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:21.067193  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:21.566937  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:22.066333  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:22.567056  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:23.066988  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:23.566331  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:24.066265  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:24.566513  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:25.067048  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:25.567212  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:26.067158  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:26.566324  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:27.066325  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:27.566435  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:28.067014  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:28.566560  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:29.066490  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:29.567080  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:30.067132  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:30.566495  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:31.066973  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:31.566321  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:32.067212  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:32.566665  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:33.066716  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:33.566326  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:34.067017  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:34.566429  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:35.067039  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:35.566936  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:36.066553  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:36.566402  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:37.066800  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:37.566267  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:38.066188  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:38.567060  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:28:38.567180  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:28:38.614003  459056 cri.go:89] found id: ""
	I0510 19:28:38.614094  459056 logs.go:282] 0 containers: []
	W0510 19:28:38.614120  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:28:38.614132  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:28:38.614211  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:28:38.651000  459056 cri.go:89] found id: ""
	I0510 19:28:38.651034  459056 logs.go:282] 0 containers: []
	W0510 19:28:38.651046  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:28:38.651053  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:28:38.651121  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:28:38.688211  459056 cri.go:89] found id: ""
	I0510 19:28:38.688238  459056 logs.go:282] 0 containers: []
	W0510 19:28:38.688246  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:28:38.688252  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:28:38.688318  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:28:38.726904  459056 cri.go:89] found id: ""
	I0510 19:28:38.726933  459056 logs.go:282] 0 containers: []
	W0510 19:28:38.726953  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:28:38.726963  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:28:38.727020  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:28:38.764293  459056 cri.go:89] found id: ""
	I0510 19:28:38.764321  459056 logs.go:282] 0 containers: []
	W0510 19:28:38.764330  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:28:38.764335  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:28:38.764390  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:28:38.802044  459056 cri.go:89] found id: ""
	I0510 19:28:38.802075  459056 logs.go:282] 0 containers: []
	W0510 19:28:38.802083  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:28:38.802104  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:28:38.802160  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:28:38.840951  459056 cri.go:89] found id: ""
	I0510 19:28:38.840991  459056 logs.go:282] 0 containers: []
	W0510 19:28:38.841002  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:28:38.841010  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:28:38.841098  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:28:38.879478  459056 cri.go:89] found id: ""
	I0510 19:28:38.879514  459056 logs.go:282] 0 containers: []
	W0510 19:28:38.879522  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:28:38.879533  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:28:38.879548  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:28:38.932148  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:28:38.932193  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:28:38.947813  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:28:38.947845  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:28:39.094230  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:28:39.094264  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:28:39.094283  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:28:39.170356  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:28:39.170406  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:28:41.716545  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:41.734713  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:28:41.734791  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:28:41.772135  459056 cri.go:89] found id: ""
	I0510 19:28:41.772178  459056 logs.go:282] 0 containers: []
	W0510 19:28:41.772187  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:28:41.772193  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:28:41.772246  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:28:41.810841  459056 cri.go:89] found id: ""
	I0510 19:28:41.810875  459056 logs.go:282] 0 containers: []
	W0510 19:28:41.810886  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:28:41.810893  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:28:41.810969  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:28:41.848600  459056 cri.go:89] found id: ""
	I0510 19:28:41.848627  459056 logs.go:282] 0 containers: []
	W0510 19:28:41.848636  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:28:41.848643  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:28:41.848735  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:28:41.887214  459056 cri.go:89] found id: ""
	I0510 19:28:41.887261  459056 logs.go:282] 0 containers: []
	W0510 19:28:41.887273  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:28:41.887282  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:28:41.887353  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:28:41.926422  459056 cri.go:89] found id: ""
	I0510 19:28:41.926455  459056 logs.go:282] 0 containers: []
	W0510 19:28:41.926466  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:28:41.926474  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:28:41.926573  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:28:41.963547  459056 cri.go:89] found id: ""
	I0510 19:28:41.963582  459056 logs.go:282] 0 containers: []
	W0510 19:28:41.963595  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:28:41.963625  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:28:41.963699  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:28:42.007903  459056 cri.go:89] found id: ""
	I0510 19:28:42.007930  459056 logs.go:282] 0 containers: []
	W0510 19:28:42.007938  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:28:42.007943  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:28:42.007996  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:28:42.048020  459056 cri.go:89] found id: ""
	I0510 19:28:42.048054  459056 logs.go:282] 0 containers: []
	W0510 19:28:42.048062  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:28:42.048072  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:28:42.048085  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:28:42.099210  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:28:42.099267  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:28:42.114915  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:28:42.114947  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:28:42.196330  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:28:42.196364  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:28:42.196380  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:28:42.278729  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:28:42.278786  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:28:44.825880  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:44.844164  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:28:44.844258  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:28:44.883963  459056 cri.go:89] found id: ""
	I0510 19:28:44.883992  459056 logs.go:282] 0 containers: []
	W0510 19:28:44.884001  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:28:44.884008  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:28:44.884085  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:28:44.920183  459056 cri.go:89] found id: ""
	I0510 19:28:44.920214  459056 logs.go:282] 0 containers: []
	W0510 19:28:44.920222  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:28:44.920228  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:28:44.920304  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:28:44.956038  459056 cri.go:89] found id: ""
	I0510 19:28:44.956072  459056 logs.go:282] 0 containers: []
	W0510 19:28:44.956087  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:28:44.956093  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:28:44.956165  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:28:44.992412  459056 cri.go:89] found id: ""
	I0510 19:28:44.992448  459056 logs.go:282] 0 containers: []
	W0510 19:28:44.992460  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:28:44.992468  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:28:44.992540  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:28:45.029970  459056 cri.go:89] found id: ""
	I0510 19:28:45.030008  459056 logs.go:282] 0 containers: []
	W0510 19:28:45.030020  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:28:45.030027  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:28:45.030097  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:28:45.065606  459056 cri.go:89] found id: ""
	I0510 19:28:45.065643  459056 logs.go:282] 0 containers: []
	W0510 19:28:45.065654  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:28:45.065662  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:28:45.065736  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:28:45.102978  459056 cri.go:89] found id: ""
	I0510 19:28:45.103009  459056 logs.go:282] 0 containers: []
	W0510 19:28:45.103018  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:28:45.103024  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:28:45.103087  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:28:45.143725  459056 cri.go:89] found id: ""
	I0510 19:28:45.143752  459056 logs.go:282] 0 containers: []
	W0510 19:28:45.143761  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:28:45.143771  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:28:45.143783  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:28:45.187406  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:28:45.187443  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:28:45.237672  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:28:45.237725  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:28:45.253387  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:28:45.253425  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:28:45.326218  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:28:45.326246  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:28:45.326265  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:28:47.904696  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:47.922232  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:28:47.922326  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:28:47.964247  459056 cri.go:89] found id: ""
	I0510 19:28:47.964284  459056 logs.go:282] 0 containers: []
	W0510 19:28:47.964293  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:28:47.964299  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:28:47.964358  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:28:48.001130  459056 cri.go:89] found id: ""
	I0510 19:28:48.001159  459056 logs.go:282] 0 containers: []
	W0510 19:28:48.001167  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:28:48.001175  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:28:48.001245  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:28:48.038486  459056 cri.go:89] found id: ""
	I0510 19:28:48.038519  459056 logs.go:282] 0 containers: []
	W0510 19:28:48.038528  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:28:48.038534  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:28:48.038604  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:28:48.073594  459056 cri.go:89] found id: ""
	I0510 19:28:48.073628  459056 logs.go:282] 0 containers: []
	W0510 19:28:48.073636  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:28:48.073643  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:28:48.073716  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:28:48.113159  459056 cri.go:89] found id: ""
	I0510 19:28:48.113191  459056 logs.go:282] 0 containers: []
	W0510 19:28:48.113199  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:28:48.113205  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:28:48.113271  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:28:48.158534  459056 cri.go:89] found id: ""
	I0510 19:28:48.158570  459056 logs.go:282] 0 containers: []
	W0510 19:28:48.158581  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:28:48.158589  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:28:48.158661  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:28:48.194840  459056 cri.go:89] found id: ""
	I0510 19:28:48.194871  459056 logs.go:282] 0 containers: []
	W0510 19:28:48.194883  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:28:48.194889  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:28:48.194952  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:28:48.233411  459056 cri.go:89] found id: ""
	I0510 19:28:48.233446  459056 logs.go:282] 0 containers: []
	W0510 19:28:48.233455  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:28:48.233465  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:28:48.233481  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:28:48.248955  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:28:48.248988  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:28:48.321462  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:28:48.321486  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:28:48.321499  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:28:48.413091  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:28:48.413139  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:28:48.455370  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:28:48.455417  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:28:51.008549  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:51.026088  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:28:51.026175  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:28:51.065801  459056 cri.go:89] found id: ""
	I0510 19:28:51.065834  459056 logs.go:282] 0 containers: []
	W0510 19:28:51.065844  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:28:51.065850  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:28:51.065915  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:28:51.108971  459056 cri.go:89] found id: ""
	I0510 19:28:51.109002  459056 logs.go:282] 0 containers: []
	W0510 19:28:51.109010  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:28:51.109017  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:28:51.109081  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:28:51.153399  459056 cri.go:89] found id: ""
	I0510 19:28:51.153425  459056 logs.go:282] 0 containers: []
	W0510 19:28:51.153434  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:28:51.153440  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:28:51.153501  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:28:51.193120  459056 cri.go:89] found id: ""
	I0510 19:28:51.193150  459056 logs.go:282] 0 containers: []
	W0510 19:28:51.193159  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:28:51.193165  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:28:51.193219  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:28:51.232126  459056 cri.go:89] found id: ""
	I0510 19:28:51.232160  459056 logs.go:282] 0 containers: []
	W0510 19:28:51.232169  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:28:51.232176  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:28:51.232262  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:28:51.271265  459056 cri.go:89] found id: ""
	I0510 19:28:51.271292  459056 logs.go:282] 0 containers: []
	W0510 19:28:51.271300  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:28:51.271306  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:28:51.271380  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:28:51.314653  459056 cri.go:89] found id: ""
	I0510 19:28:51.314687  459056 logs.go:282] 0 containers: []
	W0510 19:28:51.314698  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:28:51.314710  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:28:51.314788  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:28:51.353697  459056 cri.go:89] found id: ""
	I0510 19:28:51.353726  459056 logs.go:282] 0 containers: []
	W0510 19:28:51.353734  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:28:51.353746  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:28:51.353762  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:28:51.406474  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:28:51.406515  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:28:51.423057  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:28:51.423092  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:28:51.501527  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:28:51.501551  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:28:51.501563  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:28:51.582228  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:28:51.582278  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:28:54.132967  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:54.161653  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:28:54.161729  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:28:54.201063  459056 cri.go:89] found id: ""
	I0510 19:28:54.201098  459056 logs.go:282] 0 containers: []
	W0510 19:28:54.201111  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:28:54.201120  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:28:54.201200  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:28:54.241268  459056 cri.go:89] found id: ""
	I0510 19:28:54.241298  459056 logs.go:282] 0 containers: []
	W0510 19:28:54.241307  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:28:54.241320  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:28:54.241388  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:28:54.279508  459056 cri.go:89] found id: ""
	I0510 19:28:54.279540  459056 logs.go:282] 0 containers: []
	W0510 19:28:54.279549  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:28:54.279555  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:28:54.279621  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:28:54.322256  459056 cri.go:89] found id: ""
	I0510 19:28:54.322295  459056 logs.go:282] 0 containers: []
	W0510 19:28:54.322306  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:28:54.322349  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:28:54.322423  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:28:54.360014  459056 cri.go:89] found id: ""
	I0510 19:28:54.360051  459056 logs.go:282] 0 containers: []
	W0510 19:28:54.360062  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:28:54.360071  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:28:54.360149  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:28:54.399429  459056 cri.go:89] found id: ""
	I0510 19:28:54.399462  459056 logs.go:282] 0 containers: []
	W0510 19:28:54.399473  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:28:54.399479  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:28:54.399544  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:28:54.437094  459056 cri.go:89] found id: ""
	I0510 19:28:54.437120  459056 logs.go:282] 0 containers: []
	W0510 19:28:54.437129  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:28:54.437135  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:28:54.437213  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:28:54.473964  459056 cri.go:89] found id: ""
	I0510 19:28:54.474000  459056 logs.go:282] 0 containers: []
	W0510 19:28:54.474012  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:28:54.474024  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:28:54.474037  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:28:54.526415  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:28:54.526458  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:28:54.542142  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:28:54.542177  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:28:54.618555  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:28:54.618582  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:28:54.618600  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:28:54.695979  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:28:54.696026  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:28:57.241583  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:28:57.259270  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:28:57.259347  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:28:57.297603  459056 cri.go:89] found id: ""
	I0510 19:28:57.297640  459056 logs.go:282] 0 containers: []
	W0510 19:28:57.297648  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:28:57.297664  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:28:57.297734  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:28:57.339031  459056 cri.go:89] found id: ""
	I0510 19:28:57.339063  459056 logs.go:282] 0 containers: []
	W0510 19:28:57.339072  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:28:57.339090  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:28:57.339167  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:28:57.375753  459056 cri.go:89] found id: ""
	I0510 19:28:57.375783  459056 logs.go:282] 0 containers: []
	W0510 19:28:57.375792  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:28:57.375799  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:28:57.375855  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:28:57.414729  459056 cri.go:89] found id: ""
	I0510 19:28:57.414758  459056 logs.go:282] 0 containers: []
	W0510 19:28:57.414770  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:28:57.414779  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:28:57.414854  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:28:57.453265  459056 cri.go:89] found id: ""
	I0510 19:28:57.453298  459056 logs.go:282] 0 containers: []
	W0510 19:28:57.453309  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:28:57.453318  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:28:57.453379  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:28:57.491548  459056 cri.go:89] found id: ""
	I0510 19:28:57.491579  459056 logs.go:282] 0 containers: []
	W0510 19:28:57.491587  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:28:57.491594  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:28:57.491670  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:28:57.529795  459056 cri.go:89] found id: ""
	I0510 19:28:57.529822  459056 logs.go:282] 0 containers: []
	W0510 19:28:57.529831  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:28:57.529837  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:28:57.529901  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:28:57.570146  459056 cri.go:89] found id: ""
	I0510 19:28:57.570177  459056 logs.go:282] 0 containers: []
	W0510 19:28:57.570186  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:28:57.570196  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:28:57.570211  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:28:57.622879  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:28:57.622928  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:28:57.639210  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:28:57.639256  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:28:57.717348  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:28:57.717382  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:28:57.717399  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:28:57.799663  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:28:57.799716  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:00.351909  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:00.369231  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:00.369300  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:00.419696  459056 cri.go:89] found id: ""
	I0510 19:29:00.419730  459056 logs.go:282] 0 containers: []
	W0510 19:29:00.419740  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:00.419747  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:00.419810  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:00.456741  459056 cri.go:89] found id: ""
	I0510 19:29:00.456847  459056 logs.go:282] 0 containers: []
	W0510 19:29:00.456865  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:00.456874  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:00.456956  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:00.495771  459056 cri.go:89] found id: ""
	I0510 19:29:00.495816  459056 logs.go:282] 0 containers: []
	W0510 19:29:00.495829  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:00.495839  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:00.495919  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:00.541754  459056 cri.go:89] found id: ""
	I0510 19:29:00.541791  459056 logs.go:282] 0 containers: []
	W0510 19:29:00.541803  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:00.541812  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:00.541892  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:00.584200  459056 cri.go:89] found id: ""
	I0510 19:29:00.584230  459056 logs.go:282] 0 containers: []
	W0510 19:29:00.584239  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:00.584245  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:00.584336  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:00.632920  459056 cri.go:89] found id: ""
	I0510 19:29:00.632949  459056 logs.go:282] 0 containers: []
	W0510 19:29:00.632960  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:00.632969  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:00.633033  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:00.684270  459056 cri.go:89] found id: ""
	I0510 19:29:00.684300  459056 logs.go:282] 0 containers: []
	W0510 19:29:00.684309  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:00.684315  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:00.684368  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:00.722259  459056 cri.go:89] found id: ""
	I0510 19:29:00.722292  459056 logs.go:282] 0 containers: []
	W0510 19:29:00.722301  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:00.722311  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:00.722328  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:00.737395  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:00.737431  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:00.816432  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:00.816465  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:00.816485  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:00.900576  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:00.900631  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:00.946239  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:00.946285  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:03.499135  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:03.516795  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:03.516874  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:03.561554  459056 cri.go:89] found id: ""
	I0510 19:29:03.561589  459056 logs.go:282] 0 containers: []
	W0510 19:29:03.561599  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:03.561607  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:03.561674  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:03.604183  459056 cri.go:89] found id: ""
	I0510 19:29:03.604213  459056 logs.go:282] 0 containers: []
	W0510 19:29:03.604221  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:03.604227  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:03.604297  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:03.641319  459056 cri.go:89] found id: ""
	I0510 19:29:03.641350  459056 logs.go:282] 0 containers: []
	W0510 19:29:03.641359  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:03.641366  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:03.641431  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:03.679306  459056 cri.go:89] found id: ""
	I0510 19:29:03.679345  459056 logs.go:282] 0 containers: []
	W0510 19:29:03.679356  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:03.679364  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:03.679444  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:03.720380  459056 cri.go:89] found id: ""
	I0510 19:29:03.720412  459056 logs.go:282] 0 containers: []
	W0510 19:29:03.720420  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:03.720426  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:03.720497  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:03.758115  459056 cri.go:89] found id: ""
	I0510 19:29:03.758183  459056 logs.go:282] 0 containers: []
	W0510 19:29:03.758193  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:03.758206  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:03.758283  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:03.797182  459056 cri.go:89] found id: ""
	I0510 19:29:03.797215  459056 logs.go:282] 0 containers: []
	W0510 19:29:03.797226  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:03.797235  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:03.797294  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:03.837236  459056 cri.go:89] found id: ""
	I0510 19:29:03.837266  459056 logs.go:282] 0 containers: []
	W0510 19:29:03.837274  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:03.837284  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:03.837302  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:03.886362  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:03.886412  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:03.902546  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:03.902581  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:03.980181  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:03.980206  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:03.980219  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:04.060587  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:04.060641  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:06.606310  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:06.633919  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:06.634001  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:06.672938  459056 cri.go:89] found id: ""
	I0510 19:29:06.672969  459056 logs.go:282] 0 containers: []
	W0510 19:29:06.672978  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:06.672986  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:06.673047  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:06.711567  459056 cri.go:89] found id: ""
	I0510 19:29:06.711603  459056 logs.go:282] 0 containers: []
	W0510 19:29:06.711615  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:06.711624  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:06.711710  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:06.752456  459056 cri.go:89] found id: ""
	I0510 19:29:06.752498  459056 logs.go:282] 0 containers: []
	W0510 19:29:06.752510  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:06.752520  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:06.752592  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:06.792722  459056 cri.go:89] found id: ""
	I0510 19:29:06.792755  459056 logs.go:282] 0 containers: []
	W0510 19:29:06.792764  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:06.792771  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:06.792832  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:06.833199  459056 cri.go:89] found id: ""
	I0510 19:29:06.833231  459056 logs.go:282] 0 containers: []
	W0510 19:29:06.833239  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:06.833246  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:06.833300  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:06.871347  459056 cri.go:89] found id: ""
	I0510 19:29:06.871378  459056 logs.go:282] 0 containers: []
	W0510 19:29:06.871386  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:06.871393  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:06.871448  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:06.909447  459056 cri.go:89] found id: ""
	I0510 19:29:06.909478  459056 logs.go:282] 0 containers: []
	W0510 19:29:06.909489  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:06.909497  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:06.909561  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:06.945795  459056 cri.go:89] found id: ""
	I0510 19:29:06.945829  459056 logs.go:282] 0 containers: []
	W0510 19:29:06.945837  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:06.945847  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:06.945861  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:07.028777  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:07.028825  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:07.070640  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:07.070673  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:07.124335  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:07.124383  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:07.140167  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:07.140197  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:07.218319  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:09.718885  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:09.737619  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:09.737701  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:09.775164  459056 cri.go:89] found id: ""
	I0510 19:29:09.775203  459056 logs.go:282] 0 containers: []
	W0510 19:29:09.775211  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:09.775218  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:09.775292  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:09.819357  459056 cri.go:89] found id: ""
	I0510 19:29:09.819395  459056 logs.go:282] 0 containers: []
	W0510 19:29:09.819406  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:09.819415  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:09.819490  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:09.858894  459056 cri.go:89] found id: ""
	I0510 19:29:09.858928  459056 logs.go:282] 0 containers: []
	W0510 19:29:09.858937  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:09.858942  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:09.858996  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:09.895496  459056 cri.go:89] found id: ""
	I0510 19:29:09.895543  459056 logs.go:282] 0 containers: []
	W0510 19:29:09.895554  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:09.895562  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:09.895629  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:09.935443  459056 cri.go:89] found id: ""
	I0510 19:29:09.935476  459056 logs.go:282] 0 containers: []
	W0510 19:29:09.935484  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:09.935490  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:09.935552  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:09.975013  459056 cri.go:89] found id: ""
	I0510 19:29:09.975050  459056 logs.go:282] 0 containers: []
	W0510 19:29:09.975059  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:09.975066  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:09.975122  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:10.017332  459056 cri.go:89] found id: ""
	I0510 19:29:10.017364  459056 logs.go:282] 0 containers: []
	W0510 19:29:10.017372  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:10.017378  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:10.017432  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:10.054109  459056 cri.go:89] found id: ""
	I0510 19:29:10.054145  459056 logs.go:282] 0 containers: []
	W0510 19:29:10.054157  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:10.054169  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:10.054187  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:10.107219  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:10.107275  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:10.122900  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:10.122946  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:10.197374  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:10.197402  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:10.197423  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:10.276176  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:10.276222  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:12.822189  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:12.839516  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:12.839586  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:12.876495  459056 cri.go:89] found id: ""
	I0510 19:29:12.876532  459056 logs.go:282] 0 containers: []
	W0510 19:29:12.876544  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:12.876553  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:12.876628  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:12.914537  459056 cri.go:89] found id: ""
	I0510 19:29:12.914571  459056 logs.go:282] 0 containers: []
	W0510 19:29:12.914581  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:12.914587  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:12.914662  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:12.953369  459056 cri.go:89] found id: ""
	I0510 19:29:12.953403  459056 logs.go:282] 0 containers: []
	W0510 19:29:12.953412  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:12.953418  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:12.953475  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:12.991117  459056 cri.go:89] found id: ""
	I0510 19:29:12.991150  459056 logs.go:282] 0 containers: []
	W0510 19:29:12.991159  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:12.991167  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:12.991226  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:13.035209  459056 cri.go:89] found id: ""
	I0510 19:29:13.035268  459056 logs.go:282] 0 containers: []
	W0510 19:29:13.035281  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:13.035290  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:13.035364  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:13.072156  459056 cri.go:89] found id: ""
	I0510 19:29:13.072191  459056 logs.go:282] 0 containers: []
	W0510 19:29:13.072203  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:13.072211  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:13.072279  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:13.108863  459056 cri.go:89] found id: ""
	I0510 19:29:13.108893  459056 logs.go:282] 0 containers: []
	W0510 19:29:13.108903  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:13.108910  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:13.108967  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:13.155406  459056 cri.go:89] found id: ""
	I0510 19:29:13.155437  459056 logs.go:282] 0 containers: []
	W0510 19:29:13.155445  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:13.155455  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:13.155467  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:13.208638  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:13.208694  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:13.225071  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:13.225107  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:13.300472  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:13.300498  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:13.300515  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:13.380669  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:13.380714  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:15.924108  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:15.941384  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:15.941465  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:15.984230  459056 cri.go:89] found id: ""
	I0510 19:29:15.984259  459056 logs.go:282] 0 containers: []
	W0510 19:29:15.984267  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:15.984273  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:15.984328  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:16.022696  459056 cri.go:89] found id: ""
	I0510 19:29:16.022725  459056 logs.go:282] 0 containers: []
	W0510 19:29:16.022733  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:16.022740  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:16.022818  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:16.064311  459056 cri.go:89] found id: ""
	I0510 19:29:16.064344  459056 logs.go:282] 0 containers: []
	W0510 19:29:16.064356  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:16.064364  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:16.064432  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:16.110646  459056 cri.go:89] found id: ""
	I0510 19:29:16.110680  459056 logs.go:282] 0 containers: []
	W0510 19:29:16.110688  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:16.110695  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:16.110779  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:16.155423  459056 cri.go:89] found id: ""
	I0510 19:29:16.155466  459056 logs.go:282] 0 containers: []
	W0510 19:29:16.155478  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:16.155485  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:16.155560  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:16.199404  459056 cri.go:89] found id: ""
	I0510 19:29:16.199437  459056 logs.go:282] 0 containers: []
	W0510 19:29:16.199445  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:16.199455  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:16.199518  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:16.244501  459056 cri.go:89] found id: ""
	I0510 19:29:16.244532  459056 logs.go:282] 0 containers: []
	W0510 19:29:16.244541  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:16.244547  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:16.244622  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:16.289564  459056 cri.go:89] found id: ""
	I0510 19:29:16.289594  459056 logs.go:282] 0 containers: []
	W0510 19:29:16.289609  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:16.289628  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:16.289645  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:16.339326  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:16.339360  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:16.392002  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:16.392050  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:16.408009  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:16.408039  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:16.480932  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:16.480959  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:16.480972  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:19.062321  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:19.079587  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:19.079667  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:19.122776  459056 cri.go:89] found id: ""
	I0510 19:29:19.122809  459056 logs.go:282] 0 containers: []
	W0510 19:29:19.122817  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:19.122823  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:19.122882  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:19.160116  459056 cri.go:89] found id: ""
	I0510 19:29:19.160154  459056 logs.go:282] 0 containers: []
	W0510 19:29:19.160166  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:19.160175  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:19.160258  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:19.198049  459056 cri.go:89] found id: ""
	I0510 19:29:19.198081  459056 logs.go:282] 0 containers: []
	W0510 19:29:19.198089  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:19.198095  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:19.198151  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:19.236547  459056 cri.go:89] found id: ""
	I0510 19:29:19.236578  459056 logs.go:282] 0 containers: []
	W0510 19:29:19.236587  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:19.236596  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:19.236682  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:19.274688  459056 cri.go:89] found id: ""
	I0510 19:29:19.274727  459056 logs.go:282] 0 containers: []
	W0510 19:29:19.274738  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:19.274746  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:19.274819  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:19.317813  459056 cri.go:89] found id: ""
	I0510 19:29:19.317843  459056 logs.go:282] 0 containers: []
	W0510 19:29:19.317853  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:19.317865  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:19.317934  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:19.360619  459056 cri.go:89] found id: ""
	I0510 19:29:19.360654  459056 logs.go:282] 0 containers: []
	W0510 19:29:19.360663  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:19.360669  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:19.360735  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:19.399001  459056 cri.go:89] found id: ""
	I0510 19:29:19.399030  459056 logs.go:282] 0 containers: []
	W0510 19:29:19.399038  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:19.399048  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:19.399061  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:19.482768  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:19.482819  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:19.525273  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:19.525316  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:19.579149  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:19.579197  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:19.594813  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:19.594853  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:19.667950  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:22.169701  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:22.187665  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:22.187746  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:22.227992  459056 cri.go:89] found id: ""
	I0510 19:29:22.228022  459056 logs.go:282] 0 containers: []
	W0510 19:29:22.228030  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:22.228041  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:22.228164  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:22.267106  459056 cri.go:89] found id: ""
	I0510 19:29:22.267140  459056 logs.go:282] 0 containers: []
	W0510 19:29:22.267149  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:22.267155  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:22.267211  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:22.305600  459056 cri.go:89] found id: ""
	I0510 19:29:22.305628  459056 logs.go:282] 0 containers: []
	W0510 19:29:22.305636  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:22.305643  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:22.305711  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:22.345673  459056 cri.go:89] found id: ""
	I0510 19:29:22.345708  459056 logs.go:282] 0 containers: []
	W0510 19:29:22.345719  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:22.345724  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:22.345778  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:22.384325  459056 cri.go:89] found id: ""
	I0510 19:29:22.384358  459056 logs.go:282] 0 containers: []
	W0510 19:29:22.384371  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:22.384387  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:22.384467  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:22.424747  459056 cri.go:89] found id: ""
	I0510 19:29:22.424779  459056 logs.go:282] 0 containers: []
	W0510 19:29:22.424787  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:22.424794  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:22.424848  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:22.470878  459056 cri.go:89] found id: ""
	I0510 19:29:22.470916  459056 logs.go:282] 0 containers: []
	W0510 19:29:22.470929  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:22.470937  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:22.471010  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:22.515651  459056 cri.go:89] found id: ""
	I0510 19:29:22.515682  459056 logs.go:282] 0 containers: []
	W0510 19:29:22.515693  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:22.515713  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:22.515730  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:22.573654  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:22.573699  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:22.590599  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:22.590637  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:22.670834  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:22.670866  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:22.670882  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:22.754958  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:22.755019  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:25.299898  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:25.317959  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:25.318047  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:25.358952  459056 cri.go:89] found id: ""
	I0510 19:29:25.358990  459056 logs.go:282] 0 containers: []
	W0510 19:29:25.358999  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:25.359005  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:25.359068  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:25.402269  459056 cri.go:89] found id: ""
	I0510 19:29:25.402300  459056 logs.go:282] 0 containers: []
	W0510 19:29:25.402308  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:25.402321  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:25.402402  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:25.441309  459056 cri.go:89] found id: ""
	I0510 19:29:25.441338  459056 logs.go:282] 0 containers: []
	W0510 19:29:25.441348  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:25.441357  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:25.441421  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:25.477026  459056 cri.go:89] found id: ""
	I0510 19:29:25.477073  459056 logs.go:282] 0 containers: []
	W0510 19:29:25.477087  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:25.477095  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:25.477168  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:25.514227  459056 cri.go:89] found id: ""
	I0510 19:29:25.514263  459056 logs.go:282] 0 containers: []
	W0510 19:29:25.514274  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:25.514283  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:25.514357  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:25.552961  459056 cri.go:89] found id: ""
	I0510 19:29:25.552993  459056 logs.go:282] 0 containers: []
	W0510 19:29:25.553002  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:25.553010  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:25.553075  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:25.591284  459056 cri.go:89] found id: ""
	I0510 19:29:25.591315  459056 logs.go:282] 0 containers: []
	W0510 19:29:25.591327  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:25.591336  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:25.591404  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:25.631688  459056 cri.go:89] found id: ""
	I0510 19:29:25.631720  459056 logs.go:282] 0 containers: []
	W0510 19:29:25.631728  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:25.631737  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:25.631750  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:25.686015  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:25.686057  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:25.702233  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:25.702271  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:25.777340  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:25.777373  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:25.777389  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:25.857072  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:25.857118  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:28.400902  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:28.418498  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:28.418570  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:28.454908  459056 cri.go:89] found id: ""
	I0510 19:29:28.454941  459056 logs.go:282] 0 containers: []
	W0510 19:29:28.454950  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:28.454956  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:28.455014  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:28.493646  459056 cri.go:89] found id: ""
	I0510 19:29:28.493682  459056 logs.go:282] 0 containers: []
	W0510 19:29:28.493691  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:28.493700  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:28.493766  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:28.531482  459056 cri.go:89] found id: ""
	I0510 19:29:28.531524  459056 logs.go:282] 0 containers: []
	W0510 19:29:28.531537  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:28.531546  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:28.531618  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:28.568042  459056 cri.go:89] found id: ""
	I0510 19:29:28.568078  459056 logs.go:282] 0 containers: []
	W0510 19:29:28.568087  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:28.568093  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:28.568150  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:28.607141  459056 cri.go:89] found id: ""
	I0510 19:29:28.607172  459056 logs.go:282] 0 containers: []
	W0510 19:29:28.607181  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:28.607187  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:28.607271  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:28.645485  459056 cri.go:89] found id: ""
	I0510 19:29:28.645519  459056 logs.go:282] 0 containers: []
	W0510 19:29:28.645532  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:28.645544  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:28.645618  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:28.685596  459056 cri.go:89] found id: ""
	I0510 19:29:28.685638  459056 logs.go:282] 0 containers: []
	W0510 19:29:28.685649  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:28.685657  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:28.685724  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:28.724977  459056 cri.go:89] found id: ""
	I0510 19:29:28.725005  459056 logs.go:282] 0 containers: []
	W0510 19:29:28.725013  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:28.725023  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:28.725101  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:28.777421  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:28.777476  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:28.793767  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:28.793806  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:28.865581  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:28.865611  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:28.865638  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:28.945845  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:28.945895  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:31.491500  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:31.508822  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:31.508896  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:31.546371  459056 cri.go:89] found id: ""
	I0510 19:29:31.546400  459056 logs.go:282] 0 containers: []
	W0510 19:29:31.546412  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:31.546420  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:31.546478  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:31.588214  459056 cri.go:89] found id: ""
	I0510 19:29:31.588244  459056 logs.go:282] 0 containers: []
	W0510 19:29:31.588252  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:31.588258  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:31.588313  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:31.626683  459056 cri.go:89] found id: ""
	I0510 19:29:31.626718  459056 logs.go:282] 0 containers: []
	W0510 19:29:31.626729  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:31.626737  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:31.626810  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:31.665979  459056 cri.go:89] found id: ""
	I0510 19:29:31.666013  459056 logs.go:282] 0 containers: []
	W0510 19:29:31.666023  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:31.666030  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:31.666087  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:31.702718  459056 cri.go:89] found id: ""
	I0510 19:29:31.702751  459056 logs.go:282] 0 containers: []
	W0510 19:29:31.702767  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:31.702775  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:31.702830  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:31.740496  459056 cri.go:89] found id: ""
	I0510 19:29:31.740530  459056 logs.go:282] 0 containers: []
	W0510 19:29:31.740553  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:31.740561  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:31.740616  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:31.782178  459056 cri.go:89] found id: ""
	I0510 19:29:31.782209  459056 logs.go:282] 0 containers: []
	W0510 19:29:31.782218  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:31.782224  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:31.782278  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:31.817466  459056 cri.go:89] found id: ""
	I0510 19:29:31.817495  459056 logs.go:282] 0 containers: []
	W0510 19:29:31.817503  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:31.817512  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:31.817527  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:31.832641  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:31.832675  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:31.913719  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:31.913745  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:31.913764  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:31.990267  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:31.990316  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:32.033353  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:32.033384  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:34.586504  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:34.606546  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:34.606628  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:34.644492  459056 cri.go:89] found id: ""
	I0510 19:29:34.644526  459056 logs.go:282] 0 containers: []
	W0510 19:29:34.644539  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:34.644547  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:34.644616  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:34.684520  459056 cri.go:89] found id: ""
	I0510 19:29:34.684550  459056 logs.go:282] 0 containers: []
	W0510 19:29:34.684566  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:34.684572  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:34.684627  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:34.722015  459056 cri.go:89] found id: ""
	I0510 19:29:34.722047  459056 logs.go:282] 0 containers: []
	W0510 19:29:34.722055  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:34.722062  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:34.722118  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:34.760175  459056 cri.go:89] found id: ""
	I0510 19:29:34.760203  459056 logs.go:282] 0 containers: []
	W0510 19:29:34.760212  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:34.760219  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:34.760291  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:34.797742  459056 cri.go:89] found id: ""
	I0510 19:29:34.797775  459056 logs.go:282] 0 containers: []
	W0510 19:29:34.797787  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:34.797796  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:34.797870  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:34.834792  459056 cri.go:89] found id: ""
	I0510 19:29:34.834824  459056 logs.go:282] 0 containers: []
	W0510 19:29:34.834832  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:34.834839  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:34.834905  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:34.881683  459056 cri.go:89] found id: ""
	I0510 19:29:34.881720  459056 logs.go:282] 0 containers: []
	W0510 19:29:34.881729  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:34.881738  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:34.881815  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:34.925574  459056 cri.go:89] found id: ""
	I0510 19:29:34.925605  459056 logs.go:282] 0 containers: []
	W0510 19:29:34.925613  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:34.925622  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:34.925636  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:34.977426  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:34.977477  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:34.993190  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:34.993226  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:35.071565  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:35.071590  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:35.071604  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:35.149510  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:35.149563  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:37.697052  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:37.714716  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:37.714828  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:37.752850  459056 cri.go:89] found id: ""
	I0510 19:29:37.752896  459056 logs.go:282] 0 containers: []
	W0510 19:29:37.752909  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:37.752916  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:37.752989  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:37.791810  459056 cri.go:89] found id: ""
	I0510 19:29:37.791847  459056 logs.go:282] 0 containers: []
	W0510 19:29:37.791860  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:37.791868  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:37.791929  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:37.831622  459056 cri.go:89] found id: ""
	I0510 19:29:37.831658  459056 logs.go:282] 0 containers: []
	W0510 19:29:37.831669  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:37.831677  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:37.831755  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:37.873390  459056 cri.go:89] found id: ""
	I0510 19:29:37.873419  459056 logs.go:282] 0 containers: []
	W0510 19:29:37.873427  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:37.873434  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:37.873493  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:37.915385  459056 cri.go:89] found id: ""
	I0510 19:29:37.915421  459056 logs.go:282] 0 containers: []
	W0510 19:29:37.915431  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:37.915439  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:37.915517  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:37.953620  459056 cri.go:89] found id: ""
	I0510 19:29:37.953654  459056 logs.go:282] 0 containers: []
	W0510 19:29:37.953666  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:37.953678  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:37.953772  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:37.991282  459056 cri.go:89] found id: ""
	I0510 19:29:37.991315  459056 logs.go:282] 0 containers: []
	W0510 19:29:37.991328  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:37.991338  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:37.991413  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:38.028482  459056 cri.go:89] found id: ""
	I0510 19:29:38.028520  459056 logs.go:282] 0 containers: []
	W0510 19:29:38.028531  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:38.028545  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:38.028563  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:38.083448  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:38.083506  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:38.099016  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:38.099067  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:38.174538  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:38.174572  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:38.174587  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:38.258394  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:38.258443  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:40.803473  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:40.821814  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:40.821912  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:40.860566  459056 cri.go:89] found id: ""
	I0510 19:29:40.860600  459056 logs.go:282] 0 containers: []
	W0510 19:29:40.860612  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:40.860622  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:40.860683  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:40.897132  459056 cri.go:89] found id: ""
	I0510 19:29:40.897161  459056 logs.go:282] 0 containers: []
	W0510 19:29:40.897169  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:40.897177  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:40.897239  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:40.944583  459056 cri.go:89] found id: ""
	I0510 19:29:40.944622  459056 logs.go:282] 0 containers: []
	W0510 19:29:40.944636  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:40.944645  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:40.944715  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:40.983132  459056 cri.go:89] found id: ""
	I0510 19:29:40.983165  459056 logs.go:282] 0 containers: []
	W0510 19:29:40.983176  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:40.983185  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:40.983283  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:41.020441  459056 cri.go:89] found id: ""
	I0510 19:29:41.020477  459056 logs.go:282] 0 containers: []
	W0510 19:29:41.020486  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:41.020494  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:41.020548  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:41.058522  459056 cri.go:89] found id: ""
	I0510 19:29:41.058562  459056 logs.go:282] 0 containers: []
	W0510 19:29:41.058572  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:41.058579  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:41.058635  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:41.098730  459056 cri.go:89] found id: ""
	I0510 19:29:41.098775  459056 logs.go:282] 0 containers: []
	W0510 19:29:41.098785  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:41.098792  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:41.098854  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:41.139270  459056 cri.go:89] found id: ""
	I0510 19:29:41.139302  459056 logs.go:282] 0 containers: []
	W0510 19:29:41.139310  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:41.139322  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:41.139335  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:41.215383  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:41.215434  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:41.258268  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:41.258314  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:41.313241  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:41.313287  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:41.332109  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:41.332148  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:41.433376  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:43.935156  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:43.953570  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:43.953694  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:43.994014  459056 cri.go:89] found id: ""
	I0510 19:29:43.994049  459056 logs.go:282] 0 containers: []
	W0510 19:29:43.994075  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:43.994083  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:43.994158  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:44.033884  459056 cri.go:89] found id: ""
	I0510 19:29:44.033922  459056 logs.go:282] 0 containers: []
	W0510 19:29:44.033932  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:44.033942  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:44.033999  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:44.075902  459056 cri.go:89] found id: ""
	I0510 19:29:44.075941  459056 logs.go:282] 0 containers: []
	W0510 19:29:44.075950  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:44.075956  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:44.076018  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:44.116711  459056 cri.go:89] found id: ""
	I0510 19:29:44.116745  459056 logs.go:282] 0 containers: []
	W0510 19:29:44.116757  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:44.116779  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:44.116853  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:44.157617  459056 cri.go:89] found id: ""
	I0510 19:29:44.157652  459056 logs.go:282] 0 containers: []
	W0510 19:29:44.157661  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:44.157668  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:44.157727  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:44.197634  459056 cri.go:89] found id: ""
	I0510 19:29:44.197671  459056 logs.go:282] 0 containers: []
	W0510 19:29:44.197679  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:44.197685  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:44.197743  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:44.235756  459056 cri.go:89] found id: ""
	I0510 19:29:44.235797  459056 logs.go:282] 0 containers: []
	W0510 19:29:44.235810  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:44.235818  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:44.235879  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:44.274251  459056 cri.go:89] found id: ""
	I0510 19:29:44.274292  459056 logs.go:282] 0 containers: []
	W0510 19:29:44.274305  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:44.274317  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:44.274337  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:44.318629  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:44.318669  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:44.370941  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:44.370987  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:44.386660  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:44.386697  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:44.463056  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:44.463085  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:44.463103  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:47.046858  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:47.068619  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:47.068705  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:47.119108  459056 cri.go:89] found id: ""
	I0510 19:29:47.119138  459056 logs.go:282] 0 containers: []
	W0510 19:29:47.119148  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:47.119154  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:47.119210  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:47.160941  459056 cri.go:89] found id: ""
	I0510 19:29:47.160974  459056 logs.go:282] 0 containers: []
	W0510 19:29:47.160982  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:47.160988  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:47.161050  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:47.210420  459056 cri.go:89] found id: ""
	I0510 19:29:47.210452  459056 logs.go:282] 0 containers: []
	W0510 19:29:47.210460  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:47.210466  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:47.210520  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:47.250554  459056 cri.go:89] found id: ""
	I0510 19:29:47.250591  459056 logs.go:282] 0 containers: []
	W0510 19:29:47.250600  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:47.250612  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:47.250674  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:47.290621  459056 cri.go:89] found id: ""
	I0510 19:29:47.290656  459056 logs.go:282] 0 containers: []
	W0510 19:29:47.290667  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:47.290676  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:47.290749  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:47.331044  459056 cri.go:89] found id: ""
	I0510 19:29:47.331079  459056 logs.go:282] 0 containers: []
	W0510 19:29:47.331091  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:47.331100  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:47.331162  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:47.369926  459056 cri.go:89] found id: ""
	I0510 19:29:47.369958  459056 logs.go:282] 0 containers: []
	W0510 19:29:47.369967  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:47.369973  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:47.370047  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:47.410658  459056 cri.go:89] found id: ""
	I0510 19:29:47.410699  459056 logs.go:282] 0 containers: []
	W0510 19:29:47.410708  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:47.410723  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:47.410737  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:47.489045  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:47.489100  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:47.536078  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:47.536117  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:47.588663  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:47.588727  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:47.606182  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:47.606220  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:47.680331  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:50.180849  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:50.198636  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:50.198740  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:50.238270  459056 cri.go:89] found id: ""
	I0510 19:29:50.238301  459056 logs.go:282] 0 containers: []
	W0510 19:29:50.238314  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:50.238323  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:50.238399  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:50.276207  459056 cri.go:89] found id: ""
	I0510 19:29:50.276244  459056 logs.go:282] 0 containers: []
	W0510 19:29:50.276256  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:50.276264  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:50.276333  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:50.311826  459056 cri.go:89] found id: ""
	I0510 19:29:50.311864  459056 logs.go:282] 0 containers: []
	W0510 19:29:50.311875  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:50.311884  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:50.311961  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:50.347100  459056 cri.go:89] found id: ""
	I0510 19:29:50.347133  459056 logs.go:282] 0 containers: []
	W0510 19:29:50.347142  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:50.347151  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:50.347229  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:50.382788  459056 cri.go:89] found id: ""
	I0510 19:29:50.382816  459056 logs.go:282] 0 containers: []
	W0510 19:29:50.382824  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:50.382830  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:50.382898  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:50.420656  459056 cri.go:89] found id: ""
	I0510 19:29:50.420700  459056 logs.go:282] 0 containers: []
	W0510 19:29:50.420709  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:50.420722  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:50.420782  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:50.460911  459056 cri.go:89] found id: ""
	I0510 19:29:50.460948  459056 logs.go:282] 0 containers: []
	W0510 19:29:50.460956  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:50.460962  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:50.461016  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:50.498074  459056 cri.go:89] found id: ""
	I0510 19:29:50.498109  459056 logs.go:282] 0 containers: []
	W0510 19:29:50.498122  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:50.498135  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:50.498152  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:50.576436  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:50.576486  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:50.620554  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:50.620594  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:50.672242  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:50.672292  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:50.688401  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:50.688435  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:50.765125  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:53.266941  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:53.285235  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:53.285306  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:53.327821  459056 cri.go:89] found id: ""
	I0510 19:29:53.327872  459056 logs.go:282] 0 containers: []
	W0510 19:29:53.327880  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:53.327888  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:53.327971  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:53.367170  459056 cri.go:89] found id: ""
	I0510 19:29:53.367212  459056 logs.go:282] 0 containers: []
	W0510 19:29:53.367224  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:53.367254  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:53.367338  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:53.411071  459056 cri.go:89] found id: ""
	I0510 19:29:53.411104  459056 logs.go:282] 0 containers: []
	W0510 19:29:53.411112  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:53.411119  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:53.411194  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:53.451093  459056 cri.go:89] found id: ""
	I0510 19:29:53.451160  459056 logs.go:282] 0 containers: []
	W0510 19:29:53.451175  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:53.451184  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:53.451278  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:53.490305  459056 cri.go:89] found id: ""
	I0510 19:29:53.490337  459056 logs.go:282] 0 containers: []
	W0510 19:29:53.490345  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:53.490351  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:53.490421  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:53.529657  459056 cri.go:89] found id: ""
	I0510 19:29:53.529703  459056 logs.go:282] 0 containers: []
	W0510 19:29:53.529716  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:53.529728  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:53.529801  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:53.570169  459056 cri.go:89] found id: ""
	I0510 19:29:53.570211  459056 logs.go:282] 0 containers: []
	W0510 19:29:53.570223  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:53.570232  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:53.570300  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:53.613547  459056 cri.go:89] found id: ""
	I0510 19:29:53.613576  459056 logs.go:282] 0 containers: []
	W0510 19:29:53.613584  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:53.613593  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:53.613607  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:53.665574  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:53.665633  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:53.682279  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:53.682319  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:53.760795  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:53.760824  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:53.760843  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:53.844386  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:53.844433  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:56.398332  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:56.416456  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:56.416552  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:56.454203  459056 cri.go:89] found id: ""
	I0510 19:29:56.454240  459056 logs.go:282] 0 containers: []
	W0510 19:29:56.454254  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:56.454265  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:56.454350  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:56.492744  459056 cri.go:89] found id: ""
	I0510 19:29:56.492779  459056 logs.go:282] 0 containers: []
	W0510 19:29:56.492791  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:56.492799  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:56.492893  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:56.529891  459056 cri.go:89] found id: ""
	I0510 19:29:56.529924  459056 logs.go:282] 0 containers: []
	W0510 19:29:56.529933  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:56.529943  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:56.530000  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:56.566697  459056 cri.go:89] found id: ""
	I0510 19:29:56.566732  459056 logs.go:282] 0 containers: []
	W0510 19:29:56.566743  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:56.566752  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:56.566816  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:56.608258  459056 cri.go:89] found id: ""
	I0510 19:29:56.608295  459056 logs.go:282] 0 containers: []
	W0510 19:29:56.608307  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:56.608315  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:56.608384  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:56.648700  459056 cri.go:89] found id: ""
	I0510 19:29:56.648734  459056 logs.go:282] 0 containers: []
	W0510 19:29:56.648746  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:56.648755  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:56.648823  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:56.686623  459056 cri.go:89] found id: ""
	I0510 19:29:56.686661  459056 logs.go:282] 0 containers: []
	W0510 19:29:56.686672  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:56.686680  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:56.686750  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:56.726136  459056 cri.go:89] found id: ""
	I0510 19:29:56.726165  459056 logs.go:282] 0 containers: []
	W0510 19:29:56.726180  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:56.726193  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:29:56.726209  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:29:56.777146  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:56.777195  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:56.793496  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:56.793530  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:56.866401  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:56.866436  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:56.866452  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:56.944116  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:56.944168  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:29:59.488989  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:29:59.506161  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:29:59.506233  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:29:59.542854  459056 cri.go:89] found id: ""
	I0510 19:29:59.542891  459056 logs.go:282] 0 containers: []
	W0510 19:29:59.542900  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:29:59.542907  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:29:59.542961  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:29:59.580216  459056 cri.go:89] found id: ""
	I0510 19:29:59.580257  459056 logs.go:282] 0 containers: []
	W0510 19:29:59.580268  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:29:59.580276  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:29:59.580348  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:29:59.623729  459056 cri.go:89] found id: ""
	I0510 19:29:59.623770  459056 logs.go:282] 0 containers: []
	W0510 19:29:59.623781  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:29:59.623790  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:29:59.623854  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:29:59.662414  459056 cri.go:89] found id: ""
	I0510 19:29:59.662447  459056 logs.go:282] 0 containers: []
	W0510 19:29:59.662455  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:29:59.662462  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:29:59.662531  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:29:59.700471  459056 cri.go:89] found id: ""
	I0510 19:29:59.700505  459056 logs.go:282] 0 containers: []
	W0510 19:29:59.700514  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:29:59.700520  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:29:59.700593  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:29:59.740841  459056 cri.go:89] found id: ""
	I0510 19:29:59.740876  459056 logs.go:282] 0 containers: []
	W0510 19:29:59.740884  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:29:59.740891  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:29:59.740944  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:29:59.782895  459056 cri.go:89] found id: ""
	I0510 19:29:59.782937  459056 logs.go:282] 0 containers: []
	W0510 19:29:59.782946  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:29:59.782952  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:29:59.783021  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:29:59.820556  459056 cri.go:89] found id: ""
	I0510 19:29:59.820591  459056 logs.go:282] 0 containers: []
	W0510 19:29:59.820603  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:29:59.820615  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:29:59.820632  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:29:59.835555  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:29:59.835591  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:29:59.907710  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:29:59.907742  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:29:59.907758  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:29:59.983847  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:29:59.983895  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:00.030738  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:00.030782  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:02.583146  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:02.601217  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:02.601290  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:02.638485  459056 cri.go:89] found id: ""
	I0510 19:30:02.638523  459056 logs.go:282] 0 containers: []
	W0510 19:30:02.638536  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:02.638544  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:02.638625  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:02.676096  459056 cri.go:89] found id: ""
	I0510 19:30:02.676124  459056 logs.go:282] 0 containers: []
	W0510 19:30:02.676132  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:02.676138  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:02.676198  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:02.712753  459056 cri.go:89] found id: ""
	I0510 19:30:02.712794  459056 logs.go:282] 0 containers: []
	W0510 19:30:02.712806  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:02.712814  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:02.712889  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:02.750540  459056 cri.go:89] found id: ""
	I0510 19:30:02.750572  459056 logs.go:282] 0 containers: []
	W0510 19:30:02.750580  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:02.750588  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:02.750666  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:02.789337  459056 cri.go:89] found id: ""
	I0510 19:30:02.789372  459056 logs.go:282] 0 containers: []
	W0510 19:30:02.789386  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:02.789394  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:02.789471  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:02.827044  459056 cri.go:89] found id: ""
	I0510 19:30:02.827076  459056 logs.go:282] 0 containers: []
	W0510 19:30:02.827087  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:02.827094  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:02.827154  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:02.867202  459056 cri.go:89] found id: ""
	I0510 19:30:02.867251  459056 logs.go:282] 0 containers: []
	W0510 19:30:02.867264  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:02.867272  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:02.867336  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:02.906104  459056 cri.go:89] found id: ""
	I0510 19:30:02.906136  459056 logs.go:282] 0 containers: []
	W0510 19:30:02.906145  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:02.906155  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:02.906167  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:02.959451  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:02.959504  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:02.975037  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:02.975074  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:03.051037  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:03.051066  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:03.051083  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:03.132615  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:03.132663  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:05.677564  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:05.695683  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:05.695774  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:05.733222  459056 cri.go:89] found id: ""
	I0510 19:30:05.733253  459056 logs.go:282] 0 containers: []
	W0510 19:30:05.733266  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:05.733273  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:05.733343  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:05.775893  459056 cri.go:89] found id: ""
	I0510 19:30:05.775926  459056 logs.go:282] 0 containers: []
	W0510 19:30:05.775938  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:05.775946  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:05.776013  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:05.814170  459056 cri.go:89] found id: ""
	I0510 19:30:05.814201  459056 logs.go:282] 0 containers: []
	W0510 19:30:05.814209  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:05.814215  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:05.814271  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:05.865156  459056 cri.go:89] found id: ""
	I0510 19:30:05.865185  459056 logs.go:282] 0 containers: []
	W0510 19:30:05.865193  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:05.865200  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:05.865267  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:05.904409  459056 cri.go:89] found id: ""
	I0510 19:30:05.904440  459056 logs.go:282] 0 containers: []
	W0510 19:30:05.904449  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:05.904455  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:05.904516  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:05.948278  459056 cri.go:89] found id: ""
	I0510 19:30:05.948308  459056 logs.go:282] 0 containers: []
	W0510 19:30:05.948316  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:05.948322  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:05.948383  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:05.986379  459056 cri.go:89] found id: ""
	I0510 19:30:05.986415  459056 logs.go:282] 0 containers: []
	W0510 19:30:05.986426  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:05.986435  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:05.986502  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:06.030940  459056 cri.go:89] found id: ""
	I0510 19:30:06.030974  459056 logs.go:282] 0 containers: []
	W0510 19:30:06.030984  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:06.030994  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:06.031007  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:06.081923  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:06.081973  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:06.097288  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:06.097321  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:06.169428  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:06.169457  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:06.169471  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:06.247404  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:06.247457  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:08.791138  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:08.810447  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:08.810527  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:08.849947  459056 cri.go:89] found id: ""
	I0510 19:30:08.849983  459056 logs.go:282] 0 containers: []
	W0510 19:30:08.849996  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:08.850005  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:08.850079  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:08.889474  459056 cri.go:89] found id: ""
	I0510 19:30:08.889511  459056 logs.go:282] 0 containers: []
	W0510 19:30:08.889521  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:08.889530  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:08.889605  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:08.929364  459056 cri.go:89] found id: ""
	I0510 19:30:08.929402  459056 logs.go:282] 0 containers: []
	W0510 19:30:08.929414  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:08.929420  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:08.929481  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:08.970260  459056 cri.go:89] found id: ""
	I0510 19:30:08.970292  459056 logs.go:282] 0 containers: []
	W0510 19:30:08.970301  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:08.970312  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:08.970370  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:09.011080  459056 cri.go:89] found id: ""
	I0510 19:30:09.011114  459056 logs.go:282] 0 containers: []
	W0510 19:30:09.011123  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:09.011130  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:09.011192  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:09.050057  459056 cri.go:89] found id: ""
	I0510 19:30:09.050096  459056 logs.go:282] 0 containers: []
	W0510 19:30:09.050106  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:09.050112  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:09.050177  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:09.089408  459056 cri.go:89] found id: ""
	I0510 19:30:09.089454  459056 logs.go:282] 0 containers: []
	W0510 19:30:09.089467  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:09.089484  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:09.089559  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:09.127502  459056 cri.go:89] found id: ""
	I0510 19:30:09.127533  459056 logs.go:282] 0 containers: []
	W0510 19:30:09.127544  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:09.127555  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:09.127573  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:09.177856  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:09.177903  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:09.194009  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:09.194041  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:09.269803  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:09.269833  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:09.269851  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:09.350498  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:09.350562  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:11.895252  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:11.913748  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:11.913819  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:11.957943  459056 cri.go:89] found id: ""
	I0510 19:30:11.957974  459056 logs.go:282] 0 containers: []
	W0510 19:30:11.957982  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:11.957990  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:11.958059  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:11.999707  459056 cri.go:89] found id: ""
	I0510 19:30:11.999735  459056 logs.go:282] 0 containers: []
	W0510 19:30:11.999743  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:11.999750  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:11.999805  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:12.044866  459056 cri.go:89] found id: ""
	I0510 19:30:12.044905  459056 logs.go:282] 0 containers: []
	W0510 19:30:12.044914  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:12.044922  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:12.044980  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:12.083885  459056 cri.go:89] found id: ""
	I0510 19:30:12.083925  459056 logs.go:282] 0 containers: []
	W0510 19:30:12.083938  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:12.083946  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:12.084014  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:12.124186  459056 cri.go:89] found id: ""
	I0510 19:30:12.124223  459056 logs.go:282] 0 containers: []
	W0510 19:30:12.124232  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:12.124239  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:12.124296  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:12.163773  459056 cri.go:89] found id: ""
	I0510 19:30:12.163809  459056 logs.go:282] 0 containers: []
	W0510 19:30:12.163817  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:12.163824  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:12.163887  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:12.208245  459056 cri.go:89] found id: ""
	I0510 19:30:12.208285  459056 logs.go:282] 0 containers: []
	W0510 19:30:12.208297  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:12.208305  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:12.208378  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:12.248816  459056 cri.go:89] found id: ""
	I0510 19:30:12.248855  459056 logs.go:282] 0 containers: []
	W0510 19:30:12.248871  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:12.248885  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:12.248907  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:12.293098  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:12.293137  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:12.346119  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:12.346166  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:12.362174  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:12.362208  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:12.436485  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:12.436514  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:12.436527  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:15.021483  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:15.039908  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:15.039983  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:15.077291  459056 cri.go:89] found id: ""
	I0510 19:30:15.077323  459056 logs.go:282] 0 containers: []
	W0510 19:30:15.077335  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:15.077344  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:15.077417  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:15.119066  459056 cri.go:89] found id: ""
	I0510 19:30:15.119099  459056 logs.go:282] 0 containers: []
	W0510 19:30:15.119108  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:15.119114  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:15.119169  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:15.158927  459056 cri.go:89] found id: ""
	I0510 19:30:15.158957  459056 logs.go:282] 0 containers: []
	W0510 19:30:15.158968  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:15.158976  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:15.159052  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:15.199423  459056 cri.go:89] found id: ""
	I0510 19:30:15.199458  459056 logs.go:282] 0 containers: []
	W0510 19:30:15.199467  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:15.199474  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:15.199538  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:15.237695  459056 cri.go:89] found id: ""
	I0510 19:30:15.237734  459056 logs.go:282] 0 containers: []
	W0510 19:30:15.237744  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:15.237751  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:15.237822  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:15.280652  459056 cri.go:89] found id: ""
	I0510 19:30:15.280693  459056 logs.go:282] 0 containers: []
	W0510 19:30:15.280705  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:15.280721  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:15.280794  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:15.319730  459056 cri.go:89] found id: ""
	I0510 19:30:15.319767  459056 logs.go:282] 0 containers: []
	W0510 19:30:15.319780  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:15.319788  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:15.319861  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:15.361113  459056 cri.go:89] found id: ""
	I0510 19:30:15.361147  459056 logs.go:282] 0 containers: []
	W0510 19:30:15.361156  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:15.361165  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:15.361178  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:15.424953  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:15.425003  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:15.444155  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:15.444187  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:15.520040  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:15.520067  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:15.520080  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:15.595963  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:15.596013  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:18.142672  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:18.160293  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:18.160373  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:18.197867  459056 cri.go:89] found id: ""
	I0510 19:30:18.197911  459056 logs.go:282] 0 containers: []
	W0510 19:30:18.197920  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:18.197927  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:18.197985  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:18.236491  459056 cri.go:89] found id: ""
	I0510 19:30:18.236519  459056 logs.go:282] 0 containers: []
	W0510 19:30:18.236528  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:18.236535  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:18.236591  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:18.275316  459056 cri.go:89] found id: ""
	I0510 19:30:18.275355  459056 logs.go:282] 0 containers: []
	W0510 19:30:18.275368  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:18.275376  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:18.275447  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:18.314904  459056 cri.go:89] found id: ""
	I0510 19:30:18.314946  459056 logs.go:282] 0 containers: []
	W0510 19:30:18.314963  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:18.314972  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:18.315049  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:18.353877  459056 cri.go:89] found id: ""
	I0510 19:30:18.353906  459056 logs.go:282] 0 containers: []
	W0510 19:30:18.353924  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:18.353933  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:18.354019  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:18.391081  459056 cri.go:89] found id: ""
	I0510 19:30:18.391115  459056 logs.go:282] 0 containers: []
	W0510 19:30:18.391124  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:18.391131  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:18.391208  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:18.430112  459056 cri.go:89] found id: ""
	I0510 19:30:18.430151  459056 logs.go:282] 0 containers: []
	W0510 19:30:18.430165  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:18.430171  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:18.430241  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:18.467247  459056 cri.go:89] found id: ""
	I0510 19:30:18.467282  459056 logs.go:282] 0 containers: []
	W0510 19:30:18.467294  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:18.467307  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:18.467331  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:18.483013  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:18.483049  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:18.556404  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:18.556437  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:18.556457  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:18.634193  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:18.634242  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:18.677713  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:18.677752  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:21.230499  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:21.248397  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:21.248485  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:21.284922  459056 cri.go:89] found id: ""
	I0510 19:30:21.284961  459056 logs.go:282] 0 containers: []
	W0510 19:30:21.284974  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:21.284983  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:21.285062  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:21.323019  459056 cri.go:89] found id: ""
	I0510 19:30:21.323054  459056 logs.go:282] 0 containers: []
	W0510 19:30:21.323064  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:21.323071  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:21.323148  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:21.361809  459056 cri.go:89] found id: ""
	I0510 19:30:21.361838  459056 logs.go:282] 0 containers: []
	W0510 19:30:21.361846  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:21.361852  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:21.361930  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:21.399062  459056 cri.go:89] found id: ""
	I0510 19:30:21.399101  459056 logs.go:282] 0 containers: []
	W0510 19:30:21.399115  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:21.399124  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:21.399195  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:21.436027  459056 cri.go:89] found id: ""
	I0510 19:30:21.436061  459056 logs.go:282] 0 containers: []
	W0510 19:30:21.436071  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:21.436077  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:21.436143  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:21.481101  459056 cri.go:89] found id: ""
	I0510 19:30:21.481141  459056 logs.go:282] 0 containers: []
	W0510 19:30:21.481151  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:21.481158  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:21.481213  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:21.525918  459056 cri.go:89] found id: ""
	I0510 19:30:21.525949  459056 logs.go:282] 0 containers: []
	W0510 19:30:21.525958  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:21.525965  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:21.526051  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:21.566402  459056 cri.go:89] found id: ""
	I0510 19:30:21.566438  459056 logs.go:282] 0 containers: []
	W0510 19:30:21.566451  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:21.566466  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:21.566483  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:21.640295  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:21.640326  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:21.640344  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:21.723808  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:21.723860  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:21.787009  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:21.787053  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:21.846605  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:21.846653  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:24.365273  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:24.382257  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:24.382346  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:24.422109  459056 cri.go:89] found id: ""
	I0510 19:30:24.422145  459056 logs.go:282] 0 containers: []
	W0510 19:30:24.422154  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:24.422161  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:24.422223  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:24.461355  459056 cri.go:89] found id: ""
	I0510 19:30:24.461382  459056 logs.go:282] 0 containers: []
	W0510 19:30:24.461389  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:24.461395  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:24.461451  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:24.500168  459056 cri.go:89] found id: ""
	I0510 19:30:24.500203  459056 logs.go:282] 0 containers: []
	W0510 19:30:24.500214  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:24.500222  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:24.500293  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:24.535437  459056 cri.go:89] found id: ""
	I0510 19:30:24.535473  459056 logs.go:282] 0 containers: []
	W0510 19:30:24.535481  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:24.535487  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:24.535567  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:24.574226  459056 cri.go:89] found id: ""
	I0510 19:30:24.574262  459056 logs.go:282] 0 containers: []
	W0510 19:30:24.574274  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:24.574282  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:24.574353  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:24.611038  459056 cri.go:89] found id: ""
	I0510 19:30:24.611076  459056 logs.go:282] 0 containers: []
	W0510 19:30:24.611085  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:24.611094  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:24.611148  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:24.650255  459056 cri.go:89] found id: ""
	I0510 19:30:24.650291  459056 logs.go:282] 0 containers: []
	W0510 19:30:24.650303  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:24.650313  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:24.650382  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:24.688115  459056 cri.go:89] found id: ""
	I0510 19:30:24.688148  459056 logs.go:282] 0 containers: []
	W0510 19:30:24.688157  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:24.688166  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:24.688180  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:24.738142  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:24.738193  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:24.754027  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:24.754059  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:24.836221  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:24.836251  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:24.836270  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:24.911260  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:24.911306  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:27.453339  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:27.470837  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:27.470922  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:27.510141  459056 cri.go:89] found id: ""
	I0510 19:30:27.510171  459056 logs.go:282] 0 containers: []
	W0510 19:30:27.510180  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:27.510187  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:27.510245  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:27.560311  459056 cri.go:89] found id: ""
	I0510 19:30:27.560337  459056 logs.go:282] 0 containers: []
	W0510 19:30:27.560346  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:27.560352  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:27.560412  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:27.615618  459056 cri.go:89] found id: ""
	I0510 19:30:27.615648  459056 logs.go:282] 0 containers: []
	W0510 19:30:27.615658  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:27.615683  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:27.615745  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:27.663257  459056 cri.go:89] found id: ""
	I0510 19:30:27.663290  459056 logs.go:282] 0 containers: []
	W0510 19:30:27.663298  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:27.663305  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:27.663377  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:27.705815  459056 cri.go:89] found id: ""
	I0510 19:30:27.705856  459056 logs.go:282] 0 containers: []
	W0510 19:30:27.705864  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:27.705870  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:27.705932  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:27.744580  459056 cri.go:89] found id: ""
	I0510 19:30:27.744612  459056 logs.go:282] 0 containers: []
	W0510 19:30:27.744620  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:27.744637  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:27.744694  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:27.781041  459056 cri.go:89] found id: ""
	I0510 19:30:27.781070  459056 logs.go:282] 0 containers: []
	W0510 19:30:27.781078  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:27.781087  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:27.781145  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:27.818543  459056 cri.go:89] found id: ""
	I0510 19:30:27.818583  459056 logs.go:282] 0 containers: []
	W0510 19:30:27.818592  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:27.818603  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:27.818631  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:27.834004  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:27.834038  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:27.907944  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:27.907973  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:27.907991  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:27.988229  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:27.988276  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:28.032107  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:28.032141  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:30.581752  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:30.599095  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:30.599167  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:30.637772  459056 cri.go:89] found id: ""
	I0510 19:30:30.637804  459056 logs.go:282] 0 containers: []
	W0510 19:30:30.637815  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:30.637824  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:30.637894  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:30.674650  459056 cri.go:89] found id: ""
	I0510 19:30:30.674690  459056 logs.go:282] 0 containers: []
	W0510 19:30:30.674702  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:30.674709  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:30.674791  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:30.712335  459056 cri.go:89] found id: ""
	I0510 19:30:30.712370  459056 logs.go:282] 0 containers: []
	W0510 19:30:30.712379  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:30.712384  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:30.712457  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:30.749850  459056 cri.go:89] found id: ""
	I0510 19:30:30.749894  459056 logs.go:282] 0 containers: []
	W0510 19:30:30.749906  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:30.749914  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:30.750001  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:30.790937  459056 cri.go:89] found id: ""
	I0510 19:30:30.790976  459056 logs.go:282] 0 containers: []
	W0510 19:30:30.790985  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:30.790992  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:30.791048  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:30.830223  459056 cri.go:89] found id: ""
	I0510 19:30:30.830256  459056 logs.go:282] 0 containers: []
	W0510 19:30:30.830265  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:30.830271  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:30.830335  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:30.868658  459056 cri.go:89] found id: ""
	I0510 19:30:30.868685  459056 logs.go:282] 0 containers: []
	W0510 19:30:30.868693  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:30.868699  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:30.868755  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:30.908485  459056 cri.go:89] found id: ""
	I0510 19:30:30.908518  459056 logs.go:282] 0 containers: []
	W0510 19:30:30.908527  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:30.908537  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:30.908576  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:30.987890  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:30.987915  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:30.987930  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:31.066668  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:31.066724  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:31.114289  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:31.114322  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:31.168049  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:31.168101  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:33.685815  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:33.702996  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:33.703075  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:33.740679  459056 cri.go:89] found id: ""
	I0510 19:30:33.740710  459056 logs.go:282] 0 containers: []
	W0510 19:30:33.740718  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:33.740724  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:33.740789  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:33.778013  459056 cri.go:89] found id: ""
	I0510 19:30:33.778045  459056 logs.go:282] 0 containers: []
	W0510 19:30:33.778053  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:33.778059  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:33.778118  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:33.819601  459056 cri.go:89] found id: ""
	I0510 19:30:33.819634  459056 logs.go:282] 0 containers: []
	W0510 19:30:33.819643  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:33.819649  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:33.819719  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:33.858368  459056 cri.go:89] found id: ""
	I0510 19:30:33.858399  459056 logs.go:282] 0 containers: []
	W0510 19:30:33.858407  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:33.858414  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:33.858469  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:33.899175  459056 cri.go:89] found id: ""
	I0510 19:30:33.899210  459056 logs.go:282] 0 containers: []
	W0510 19:30:33.899219  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:33.899225  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:33.899297  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:33.938037  459056 cri.go:89] found id: ""
	I0510 19:30:33.938075  459056 logs.go:282] 0 containers: []
	W0510 19:30:33.938085  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:33.938092  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:33.938151  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:33.976364  459056 cri.go:89] found id: ""
	I0510 19:30:33.976398  459056 logs.go:282] 0 containers: []
	W0510 19:30:33.976408  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:33.976415  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:33.976474  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:34.019444  459056 cri.go:89] found id: ""
	I0510 19:30:34.019476  459056 logs.go:282] 0 containers: []
	W0510 19:30:34.019485  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:34.019496  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:34.019509  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:34.066863  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:34.066897  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:34.116346  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:34.116394  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:34.131809  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:34.131842  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:34.201228  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:34.201261  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:34.201278  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:36.784883  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:36.802185  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:36.802277  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:36.838342  459056 cri.go:89] found id: ""
	I0510 19:30:36.838382  459056 logs.go:282] 0 containers: []
	W0510 19:30:36.838395  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:36.838405  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:36.838484  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:36.875021  459056 cri.go:89] found id: ""
	I0510 19:30:36.875052  459056 logs.go:282] 0 containers: []
	W0510 19:30:36.875060  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:36.875066  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:36.875136  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:36.912550  459056 cri.go:89] found id: ""
	I0510 19:30:36.912579  459056 logs.go:282] 0 containers: []
	W0510 19:30:36.912589  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:36.912595  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:36.912672  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:36.953970  459056 cri.go:89] found id: ""
	I0510 19:30:36.954002  459056 logs.go:282] 0 containers: []
	W0510 19:30:36.954013  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:36.954021  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:36.954090  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:36.990198  459056 cri.go:89] found id: ""
	I0510 19:30:36.990227  459056 logs.go:282] 0 containers: []
	W0510 19:30:36.990236  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:36.990242  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:36.990315  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:37.026559  459056 cri.go:89] found id: ""
	I0510 19:30:37.026594  459056 logs.go:282] 0 containers: []
	W0510 19:30:37.026604  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:37.026612  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:37.026696  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:37.063080  459056 cri.go:89] found id: ""
	I0510 19:30:37.063112  459056 logs.go:282] 0 containers: []
	W0510 19:30:37.063120  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:37.063127  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:37.063181  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:37.099746  459056 cri.go:89] found id: ""
	I0510 19:30:37.099786  459056 logs.go:282] 0 containers: []
	W0510 19:30:37.099800  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:37.099814  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:37.099831  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:37.150884  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:37.150932  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:37.166536  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:37.166568  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:37.241013  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:37.241045  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:37.241062  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:37.319328  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:37.319370  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:39.863629  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:39.881255  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:39.881331  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:39.921099  459056 cri.go:89] found id: ""
	I0510 19:30:39.921128  459056 logs.go:282] 0 containers: []
	W0510 19:30:39.921136  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:39.921142  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:39.921208  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:39.958588  459056 cri.go:89] found id: ""
	I0510 19:30:39.958620  459056 logs.go:282] 0 containers: []
	W0510 19:30:39.958629  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:39.958634  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:39.958701  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:39.995129  459056 cri.go:89] found id: ""
	I0510 19:30:39.995160  459056 logs.go:282] 0 containers: []
	W0510 19:30:39.995168  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:39.995174  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:39.995230  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:40.031278  459056 cri.go:89] found id: ""
	I0510 19:30:40.031308  459056 logs.go:282] 0 containers: []
	W0510 19:30:40.031320  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:40.031328  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:40.031399  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:40.069662  459056 cri.go:89] found id: ""
	I0510 19:30:40.069694  459056 logs.go:282] 0 containers: []
	W0510 19:30:40.069703  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:40.069708  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:40.069769  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:40.106418  459056 cri.go:89] found id: ""
	I0510 19:30:40.106452  459056 logs.go:282] 0 containers: []
	W0510 19:30:40.106464  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:40.106474  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:40.106546  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:40.143694  459056 cri.go:89] found id: ""
	I0510 19:30:40.143728  459056 logs.go:282] 0 containers: []
	W0510 19:30:40.143737  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:40.143743  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:40.143812  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:40.178265  459056 cri.go:89] found id: ""
	I0510 19:30:40.178296  459056 logs.go:282] 0 containers: []
	W0510 19:30:40.178304  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:40.178314  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:40.178328  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:40.247907  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:40.247940  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:40.247959  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:40.321933  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:40.321985  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:40.368947  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:40.368991  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:40.419749  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:40.419791  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:42.936834  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:42.954258  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:42.954332  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:42.991570  459056 cri.go:89] found id: ""
	I0510 19:30:42.991603  459056 logs.go:282] 0 containers: []
	W0510 19:30:42.991611  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:42.991617  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:42.991685  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:43.029718  459056 cri.go:89] found id: ""
	I0510 19:30:43.029751  459056 logs.go:282] 0 containers: []
	W0510 19:30:43.029759  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:43.029766  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:43.029824  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:43.068297  459056 cri.go:89] found id: ""
	I0510 19:30:43.068328  459056 logs.go:282] 0 containers: []
	W0510 19:30:43.068335  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:43.068342  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:43.068405  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:43.109805  459056 cri.go:89] found id: ""
	I0510 19:30:43.109833  459056 logs.go:282] 0 containers: []
	W0510 19:30:43.109841  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:43.109847  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:43.109900  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:43.148109  459056 cri.go:89] found id: ""
	I0510 19:30:43.148141  459056 logs.go:282] 0 containers: []
	W0510 19:30:43.148149  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:43.148156  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:43.148224  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:43.185187  459056 cri.go:89] found id: ""
	I0510 19:30:43.185221  459056 logs.go:282] 0 containers: []
	W0510 19:30:43.185230  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:43.185239  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:43.185293  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:43.224447  459056 cri.go:89] found id: ""
	I0510 19:30:43.224476  459056 logs.go:282] 0 containers: []
	W0510 19:30:43.224485  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:43.224496  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:43.224552  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:43.268442  459056 cri.go:89] found id: ""
	I0510 19:30:43.268471  459056 logs.go:282] 0 containers: []
	W0510 19:30:43.268480  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:43.268489  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:43.268501  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:43.347249  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:43.347282  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:43.347307  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:43.427928  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:43.427975  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:43.473221  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:43.473258  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:43.522748  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:43.522796  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:46.040289  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:46.058969  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:46.059051  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:46.102709  459056 cri.go:89] found id: ""
	I0510 19:30:46.102757  459056 logs.go:282] 0 containers: []
	W0510 19:30:46.102775  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:46.102786  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:46.102848  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:46.146551  459056 cri.go:89] found id: ""
	I0510 19:30:46.146584  459056 logs.go:282] 0 containers: []
	W0510 19:30:46.146593  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:46.146599  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:46.146670  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:46.187445  459056 cri.go:89] found id: ""
	I0510 19:30:46.187484  459056 logs.go:282] 0 containers: []
	W0510 19:30:46.187498  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:46.187505  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:46.187575  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:46.224647  459056 cri.go:89] found id: ""
	I0510 19:30:46.224686  459056 logs.go:282] 0 containers: []
	W0510 19:30:46.224697  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:46.224706  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:46.224786  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:46.263513  459056 cri.go:89] found id: ""
	I0510 19:30:46.263545  459056 logs.go:282] 0 containers: []
	W0510 19:30:46.263554  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:46.263560  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:46.263639  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:46.300049  459056 cri.go:89] found id: ""
	I0510 19:30:46.300085  459056 logs.go:282] 0 containers: []
	W0510 19:30:46.300096  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:46.300104  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:46.300174  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:46.337107  459056 cri.go:89] found id: ""
	I0510 19:30:46.337139  459056 logs.go:282] 0 containers: []
	W0510 19:30:46.337150  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:46.337159  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:46.337219  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:46.373699  459056 cri.go:89] found id: ""
	I0510 19:30:46.373736  459056 logs.go:282] 0 containers: []
	W0510 19:30:46.373748  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:46.373761  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:46.373777  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:46.425713  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:46.425764  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:46.441565  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:46.441602  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:46.517861  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:46.517897  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:46.517918  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:46.601755  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:46.601807  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:49.147704  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:49.165325  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:49.165397  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:49.206272  459056 cri.go:89] found id: ""
	I0510 19:30:49.206309  459056 logs.go:282] 0 containers: []
	W0510 19:30:49.206318  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:49.206324  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:49.206385  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:49.241832  459056 cri.go:89] found id: ""
	I0510 19:30:49.241863  459056 logs.go:282] 0 containers: []
	W0510 19:30:49.241871  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:49.241878  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:49.241958  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:49.280474  459056 cri.go:89] found id: ""
	I0510 19:30:49.280505  459056 logs.go:282] 0 containers: []
	W0510 19:30:49.280514  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:49.280520  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:49.280577  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:49.317656  459056 cri.go:89] found id: ""
	I0510 19:30:49.317687  459056 logs.go:282] 0 containers: []
	W0510 19:30:49.317699  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:49.317718  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:49.317789  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:49.356059  459056 cri.go:89] found id: ""
	I0510 19:30:49.356094  459056 logs.go:282] 0 containers: []
	W0510 19:30:49.356102  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:49.356112  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:49.356169  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:49.396831  459056 cri.go:89] found id: ""
	I0510 19:30:49.396864  459056 logs.go:282] 0 containers: []
	W0510 19:30:49.396877  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:49.396885  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:49.396954  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:49.433301  459056 cri.go:89] found id: ""
	I0510 19:30:49.433328  459056 logs.go:282] 0 containers: []
	W0510 19:30:49.433336  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:49.433342  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:49.433416  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:49.470642  459056 cri.go:89] found id: ""
	I0510 19:30:49.470674  459056 logs.go:282] 0 containers: []
	W0510 19:30:49.470686  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:49.470698  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:49.470715  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:49.520867  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:49.520910  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:49.536370  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:49.536406  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:49.608860  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:49.608894  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:49.608913  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:49.687344  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:49.687395  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:52.231133  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:52.248456  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:52.248550  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:52.288902  459056 cri.go:89] found id: ""
	I0510 19:30:52.288960  459056 logs.go:282] 0 containers: []
	W0510 19:30:52.288973  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:52.288982  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:52.289062  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:52.326578  459056 cri.go:89] found id: ""
	I0510 19:30:52.326611  459056 logs.go:282] 0 containers: []
	W0510 19:30:52.326626  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:52.326634  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:52.326713  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:52.368627  459056 cri.go:89] found id: ""
	I0510 19:30:52.368657  459056 logs.go:282] 0 containers: []
	W0510 19:30:52.368666  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:52.368672  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:52.368754  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:52.406483  459056 cri.go:89] found id: ""
	I0510 19:30:52.406518  459056 logs.go:282] 0 containers: []
	W0510 19:30:52.406526  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:52.406533  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:52.406599  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:52.445770  459056 cri.go:89] found id: ""
	I0510 19:30:52.445805  459056 logs.go:282] 0 containers: []
	W0510 19:30:52.445816  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:52.445826  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:52.445898  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:52.484279  459056 cri.go:89] found id: ""
	I0510 19:30:52.484315  459056 logs.go:282] 0 containers: []
	W0510 19:30:52.484325  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:52.484332  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:52.484395  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:52.523564  459056 cri.go:89] found id: ""
	I0510 19:30:52.523601  459056 logs.go:282] 0 containers: []
	W0510 19:30:52.523628  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:52.523634  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:52.523701  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:52.566712  459056 cri.go:89] found id: ""
	I0510 19:30:52.566747  459056 logs.go:282] 0 containers: []
	W0510 19:30:52.566756  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:52.566768  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:52.566784  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:52.618210  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:52.618263  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:52.635481  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:52.635518  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:52.710370  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:52.710415  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:52.710435  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:52.789902  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:52.789960  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:55.334697  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:55.351738  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:55.351815  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:55.387464  459056 cri.go:89] found id: ""
	I0510 19:30:55.387493  459056 logs.go:282] 0 containers: []
	W0510 19:30:55.387503  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:55.387512  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:55.387578  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:55.424565  459056 cri.go:89] found id: ""
	I0510 19:30:55.424597  459056 logs.go:282] 0 containers: []
	W0510 19:30:55.424608  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:55.424617  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:55.424690  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:55.461558  459056 cri.go:89] found id: ""
	I0510 19:30:55.461597  459056 logs.go:282] 0 containers: []
	W0510 19:30:55.461608  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:55.461616  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:55.461689  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:55.500713  459056 cri.go:89] found id: ""
	I0510 19:30:55.500742  459056 logs.go:282] 0 containers: []
	W0510 19:30:55.500756  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:55.500763  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:55.500826  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:55.536104  459056 cri.go:89] found id: ""
	I0510 19:30:55.536132  459056 logs.go:282] 0 containers: []
	W0510 19:30:55.536141  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:55.536147  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:55.536206  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:55.571895  459056 cri.go:89] found id: ""
	I0510 19:30:55.571924  459056 logs.go:282] 0 containers: []
	W0510 19:30:55.571932  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:55.571938  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:55.571996  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:55.610794  459056 cri.go:89] found id: ""
	I0510 19:30:55.610822  459056 logs.go:282] 0 containers: []
	W0510 19:30:55.610831  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:55.610837  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:55.610904  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:55.647514  459056 cri.go:89] found id: ""
	I0510 19:30:55.647544  459056 logs.go:282] 0 containers: []
	W0510 19:30:55.647554  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:55.647563  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:55.647578  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:55.697745  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:55.697788  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:55.714126  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:55.714161  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:55.786711  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:55.786735  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:55.786749  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:55.863002  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:55.863049  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:30:58.428393  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:30:58.446138  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:30:58.446216  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:30:58.482821  459056 cri.go:89] found id: ""
	I0510 19:30:58.482856  459056 logs.go:282] 0 containers: []
	W0510 19:30:58.482872  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:30:58.482880  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:30:58.482939  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:30:58.524325  459056 cri.go:89] found id: ""
	I0510 19:30:58.524358  459056 logs.go:282] 0 containers: []
	W0510 19:30:58.524369  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:30:58.524377  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:30:58.524433  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:30:58.564327  459056 cri.go:89] found id: ""
	I0510 19:30:58.564366  459056 logs.go:282] 0 containers: []
	W0510 19:30:58.564377  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:30:58.564383  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:30:58.564439  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:30:58.602937  459056 cri.go:89] found id: ""
	I0510 19:30:58.602966  459056 logs.go:282] 0 containers: []
	W0510 19:30:58.602974  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:30:58.602981  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:30:58.603038  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:30:58.639820  459056 cri.go:89] found id: ""
	I0510 19:30:58.639852  459056 logs.go:282] 0 containers: []
	W0510 19:30:58.639863  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:30:58.639871  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:30:58.639963  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:30:58.676466  459056 cri.go:89] found id: ""
	I0510 19:30:58.676503  459056 logs.go:282] 0 containers: []
	W0510 19:30:58.676515  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:30:58.676524  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:30:58.676593  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:30:58.712669  459056 cri.go:89] found id: ""
	I0510 19:30:58.712706  459056 logs.go:282] 0 containers: []
	W0510 19:30:58.712715  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:30:58.712721  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:30:58.712797  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:30:58.748436  459056 cri.go:89] found id: ""
	I0510 19:30:58.748474  459056 logs.go:282] 0 containers: []
	W0510 19:30:58.748485  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:30:58.748496  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:30:58.748513  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:30:58.801263  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:30:58.801311  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:30:58.816908  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:30:58.816945  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:30:58.890881  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:30:58.890912  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:30:58.890932  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:30:58.969061  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:30:58.969113  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:31:01.513933  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:31:01.531492  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:31:01.531565  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:31:01.568296  459056 cri.go:89] found id: ""
	I0510 19:31:01.568324  459056 logs.go:282] 0 containers: []
	W0510 19:31:01.568333  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:31:01.568340  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:31:01.568396  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:31:01.610372  459056 cri.go:89] found id: ""
	I0510 19:31:01.610406  459056 logs.go:282] 0 containers: []
	W0510 19:31:01.610415  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:31:01.610421  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:31:01.610485  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:31:01.648652  459056 cri.go:89] found id: ""
	I0510 19:31:01.648682  459056 logs.go:282] 0 containers: []
	W0510 19:31:01.648690  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:31:01.648696  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:31:01.648751  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:31:01.686551  459056 cri.go:89] found id: ""
	I0510 19:31:01.686583  459056 logs.go:282] 0 containers: []
	W0510 19:31:01.686595  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:31:01.686604  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:31:01.686694  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:31:01.724202  459056 cri.go:89] found id: ""
	I0510 19:31:01.724243  459056 logs.go:282] 0 containers: []
	W0510 19:31:01.724255  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:31:01.724261  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:31:01.724337  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:31:01.763500  459056 cri.go:89] found id: ""
	I0510 19:31:01.763534  459056 logs.go:282] 0 containers: []
	W0510 19:31:01.763544  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:31:01.763550  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:31:01.763629  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:31:01.808280  459056 cri.go:89] found id: ""
	I0510 19:31:01.808312  459056 logs.go:282] 0 containers: []
	W0510 19:31:01.808324  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:31:01.808332  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:31:01.808403  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:31:01.843980  459056 cri.go:89] found id: ""
	I0510 19:31:01.844018  459056 logs.go:282] 0 containers: []
	W0510 19:31:01.844031  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:31:01.844044  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:31:01.844061  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:31:01.907482  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:31:01.907521  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:31:01.922645  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:31:01.922683  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:31:01.999977  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:31:02.000009  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:31:02.000031  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:31:02.078872  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:31:02.078920  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:31:04.624201  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:31:04.641739  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:31:04.641818  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:31:04.680796  459056 cri.go:89] found id: ""
	I0510 19:31:04.680825  459056 logs.go:282] 0 containers: []
	W0510 19:31:04.680833  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:31:04.680839  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:31:04.680893  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:31:04.718840  459056 cri.go:89] found id: ""
	I0510 19:31:04.718867  459056 logs.go:282] 0 containers: []
	W0510 19:31:04.718874  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:31:04.718880  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:31:04.718943  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:31:04.753687  459056 cri.go:89] found id: ""
	I0510 19:31:04.753726  459056 logs.go:282] 0 containers: []
	W0510 19:31:04.753737  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:31:04.753745  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:31:04.753815  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:31:04.790863  459056 cri.go:89] found id: ""
	I0510 19:31:04.790893  459056 logs.go:282] 0 containers: []
	W0510 19:31:04.790903  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:31:04.790910  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:31:04.790969  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:31:04.828293  459056 cri.go:89] found id: ""
	I0510 19:31:04.828321  459056 logs.go:282] 0 containers: []
	W0510 19:31:04.828329  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:31:04.828335  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:31:04.828400  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:31:04.865914  459056 cri.go:89] found id: ""
	I0510 19:31:04.865955  459056 logs.go:282] 0 containers: []
	W0510 19:31:04.865964  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:31:04.865970  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:31:04.866025  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:31:04.902834  459056 cri.go:89] found id: ""
	I0510 19:31:04.902866  459056 logs.go:282] 0 containers: []
	W0510 19:31:04.902879  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:31:04.902888  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:31:04.902960  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:31:04.939660  459056 cri.go:89] found id: ""
	I0510 19:31:04.939694  459056 logs.go:282] 0 containers: []
	W0510 19:31:04.939702  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:31:04.939711  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:31:04.939729  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:31:04.954569  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:31:04.954608  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:31:05.026998  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:31:05.027024  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:31:05.027041  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:31:05.111468  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:31:05.111520  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:31:05.155909  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:31:05.155953  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:31:07.709153  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:31:07.726572  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:31:07.726671  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:31:07.766663  459056 cri.go:89] found id: ""
	I0510 19:31:07.766691  459056 logs.go:282] 0 containers: []
	W0510 19:31:07.766703  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:31:07.766712  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:31:07.766909  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:31:07.806853  459056 cri.go:89] found id: ""
	I0510 19:31:07.806902  459056 logs.go:282] 0 containers: []
	W0510 19:31:07.806911  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:31:07.806917  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:31:07.806985  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:31:07.845188  459056 cri.go:89] found id: ""
	I0510 19:31:07.845218  459056 logs.go:282] 0 containers: []
	W0510 19:31:07.845227  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:31:07.845233  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:31:07.845291  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:31:07.884790  459056 cri.go:89] found id: ""
	I0510 19:31:07.884827  459056 logs.go:282] 0 containers: []
	W0510 19:31:07.884840  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:31:07.884847  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:31:07.884919  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:31:07.924161  459056 cri.go:89] found id: ""
	I0510 19:31:07.924195  459056 logs.go:282] 0 containers: []
	W0510 19:31:07.924206  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:31:07.924222  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:31:07.924288  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:31:07.962697  459056 cri.go:89] found id: ""
	I0510 19:31:07.962724  459056 logs.go:282] 0 containers: []
	W0510 19:31:07.962735  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:31:07.962744  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:31:07.962840  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:31:08.001266  459056 cri.go:89] found id: ""
	I0510 19:31:08.001306  459056 logs.go:282] 0 containers: []
	W0510 19:31:08.001318  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:31:08.001326  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:31:08.001418  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:31:08.040211  459056 cri.go:89] found id: ""
	I0510 19:31:08.040238  459056 logs.go:282] 0 containers: []
	W0510 19:31:08.040247  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:31:08.040255  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:31:08.040272  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:31:08.114738  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:31:08.114784  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:31:08.114802  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:31:08.188677  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:31:08.188725  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:31:08.232875  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:31:08.232908  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:31:08.293039  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:31:08.293095  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:31:10.811640  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:31:10.828942  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:31:10.829017  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:31:10.866960  459056 cri.go:89] found id: ""
	I0510 19:31:10.866993  459056 logs.go:282] 0 containers: []
	W0510 19:31:10.867003  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:31:10.867009  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:31:10.867066  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:31:10.906391  459056 cri.go:89] found id: ""
	I0510 19:31:10.906421  459056 logs.go:282] 0 containers: []
	W0510 19:31:10.906430  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:31:10.906436  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:31:10.906503  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:31:10.947062  459056 cri.go:89] found id: ""
	I0510 19:31:10.947091  459056 logs.go:282] 0 containers: []
	W0510 19:31:10.947100  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:31:10.947106  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:31:10.947172  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:31:10.984506  459056 cri.go:89] found id: ""
	I0510 19:31:10.984535  459056 logs.go:282] 0 containers: []
	W0510 19:31:10.984543  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:31:10.984549  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:31:10.984613  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:31:11.022676  459056 cri.go:89] found id: ""
	I0510 19:31:11.022715  459056 logs.go:282] 0 containers: []
	W0510 19:31:11.022724  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:31:11.022730  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:31:11.022805  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:31:11.067215  459056 cri.go:89] found id: ""
	I0510 19:31:11.067260  459056 logs.go:282] 0 containers: []
	W0510 19:31:11.067273  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:31:11.067282  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:31:11.067344  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:31:11.106883  459056 cri.go:89] found id: ""
	I0510 19:31:11.106912  459056 logs.go:282] 0 containers: []
	W0510 19:31:11.106920  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:31:11.106926  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:31:11.106984  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:31:11.148375  459056 cri.go:89] found id: ""
	I0510 19:31:11.148408  459056 logs.go:282] 0 containers: []
	W0510 19:31:11.148416  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:31:11.148426  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:31:11.148441  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:31:11.199507  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:31:11.199555  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:31:11.215477  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:31:11.215509  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:31:11.285250  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:31:11.285278  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:31:11.285292  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:31:11.365666  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:31:11.365724  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:31:13.914500  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:31:13.931769  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:31:13.931843  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:31:13.971450  459056 cri.go:89] found id: ""
	I0510 19:31:13.971481  459056 logs.go:282] 0 containers: []
	W0510 19:31:13.971491  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:31:13.971503  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:31:13.971585  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:31:14.016556  459056 cri.go:89] found id: ""
	I0510 19:31:14.016603  459056 logs.go:282] 0 containers: []
	W0510 19:31:14.016615  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:31:14.016624  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:31:14.016717  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:31:14.067360  459056 cri.go:89] found id: ""
	I0510 19:31:14.067395  459056 logs.go:282] 0 containers: []
	W0510 19:31:14.067406  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:31:14.067415  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:31:14.067490  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:31:14.115508  459056 cri.go:89] found id: ""
	I0510 19:31:14.115547  459056 logs.go:282] 0 containers: []
	W0510 19:31:14.115559  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:31:14.115566  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:31:14.115653  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:31:14.162589  459056 cri.go:89] found id: ""
	I0510 19:31:14.162620  459056 logs.go:282] 0 containers: []
	W0510 19:31:14.162629  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:31:14.162635  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:31:14.162720  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:31:14.203802  459056 cri.go:89] found id: ""
	I0510 19:31:14.203842  459056 logs.go:282] 0 containers: []
	W0510 19:31:14.203853  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:31:14.203861  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:31:14.203927  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:31:14.242404  459056 cri.go:89] found id: ""
	I0510 19:31:14.242440  459056 logs.go:282] 0 containers: []
	W0510 19:31:14.242449  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:31:14.242455  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:31:14.242526  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:31:14.279788  459056 cri.go:89] found id: ""
	I0510 19:31:14.279820  459056 logs.go:282] 0 containers: []
	W0510 19:31:14.279831  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:31:14.279843  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:31:14.279861  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:31:14.295706  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:31:14.295741  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:31:14.369637  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:31:14.369665  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:31:14.369684  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:31:14.445062  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:31:14.445113  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:31:14.488659  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:31:14.488692  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:31:17.042803  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:31:17.060263  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:31:17.060348  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:31:17.098561  459056 cri.go:89] found id: ""
	I0510 19:31:17.098588  459056 logs.go:282] 0 containers: []
	W0510 19:31:17.098597  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:31:17.098602  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:31:17.098666  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:31:17.136124  459056 cri.go:89] found id: ""
	I0510 19:31:17.136155  459056 logs.go:282] 0 containers: []
	W0510 19:31:17.136163  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:31:17.136169  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:31:17.136226  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:31:17.174746  459056 cri.go:89] found id: ""
	I0510 19:31:17.174773  459056 logs.go:282] 0 containers: []
	W0510 19:31:17.174781  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:31:17.174788  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:31:17.174853  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:31:17.211764  459056 cri.go:89] found id: ""
	I0510 19:31:17.211802  459056 logs.go:282] 0 containers: []
	W0510 19:31:17.211813  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:31:17.211822  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:31:17.211893  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:31:17.250173  459056 cri.go:89] found id: ""
	I0510 19:31:17.250220  459056 logs.go:282] 0 containers: []
	W0510 19:31:17.250231  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:31:17.250240  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:31:17.250307  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:31:17.288067  459056 cri.go:89] found id: ""
	I0510 19:31:17.288098  459056 logs.go:282] 0 containers: []
	W0510 19:31:17.288106  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:31:17.288113  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:31:17.288167  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:31:17.332174  459056 cri.go:89] found id: ""
	I0510 19:31:17.332201  459056 logs.go:282] 0 containers: []
	W0510 19:31:17.332210  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:31:17.332215  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:31:17.332279  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:31:17.368361  459056 cri.go:89] found id: ""
	I0510 19:31:17.368393  459056 logs.go:282] 0 containers: []
	W0510 19:31:17.368401  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:31:17.368414  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:31:17.368431  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:31:17.419140  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:31:17.419188  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:31:17.435060  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:31:17.435092  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:31:17.503946  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:31:17.503971  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:31:17.503985  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:31:17.577584  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:31:17.577636  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:31:20.122561  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:31:20.140245  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:31:20.140318  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:31:20.176963  459056 cri.go:89] found id: ""
	I0510 19:31:20.176997  459056 logs.go:282] 0 containers: []
	W0510 19:31:20.177006  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:31:20.177014  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:31:20.177082  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:31:20.214648  459056 cri.go:89] found id: ""
	I0510 19:31:20.214686  459056 logs.go:282] 0 containers: []
	W0510 19:31:20.214694  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:31:20.214700  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:31:20.214756  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:31:20.252572  459056 cri.go:89] found id: ""
	I0510 19:31:20.252603  459056 logs.go:282] 0 containers: []
	W0510 19:31:20.252610  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:31:20.252616  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:31:20.252690  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:31:20.292626  459056 cri.go:89] found id: ""
	I0510 19:31:20.292658  459056 logs.go:282] 0 containers: []
	W0510 19:31:20.292667  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:31:20.292673  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:31:20.292731  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:31:20.331394  459056 cri.go:89] found id: ""
	I0510 19:31:20.331426  459056 logs.go:282] 0 containers: []
	W0510 19:31:20.331433  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:31:20.331440  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:31:20.331493  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:31:20.369499  459056 cri.go:89] found id: ""
	I0510 19:31:20.369526  459056 logs.go:282] 0 containers: []
	W0510 19:31:20.369534  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:31:20.369541  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:31:20.369598  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:31:20.409063  459056 cri.go:89] found id: ""
	I0510 19:31:20.409101  459056 logs.go:282] 0 containers: []
	W0510 19:31:20.409119  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:31:20.409129  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:31:20.409202  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:31:20.448127  459056 cri.go:89] found id: ""
	I0510 19:31:20.448165  459056 logs.go:282] 0 containers: []
	W0510 19:31:20.448176  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:31:20.448192  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:31:20.448217  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:31:20.529717  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:31:20.529761  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:31:20.572287  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:31:20.572324  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:31:20.622908  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:31:20.622953  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:31:20.638966  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:31:20.639001  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:31:20.710197  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:31:23.211978  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:31:23.228993  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:31:23.229066  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:31:23.266521  459056 cri.go:89] found id: ""
	I0510 19:31:23.266554  459056 logs.go:282] 0 containers: []
	W0510 19:31:23.266563  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:31:23.266570  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:31:23.266624  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:31:23.305315  459056 cri.go:89] found id: ""
	I0510 19:31:23.305348  459056 logs.go:282] 0 containers: []
	W0510 19:31:23.305362  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:31:23.305371  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:31:23.305428  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:31:23.353734  459056 cri.go:89] found id: ""
	I0510 19:31:23.353764  459056 logs.go:282] 0 containers: []
	W0510 19:31:23.353773  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:31:23.353779  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:31:23.353836  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:31:23.392351  459056 cri.go:89] found id: ""
	I0510 19:31:23.392389  459056 logs.go:282] 0 containers: []
	W0510 19:31:23.392400  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:31:23.392408  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:31:23.392481  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:31:23.432302  459056 cri.go:89] found id: ""
	I0510 19:31:23.432338  459056 logs.go:282] 0 containers: []
	W0510 19:31:23.432349  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:31:23.432357  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:31:23.432423  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:31:23.470143  459056 cri.go:89] found id: ""
	I0510 19:31:23.470171  459056 logs.go:282] 0 containers: []
	W0510 19:31:23.470178  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:31:23.470184  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:31:23.470240  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:31:23.510123  459056 cri.go:89] found id: ""
	I0510 19:31:23.510151  459056 logs.go:282] 0 containers: []
	W0510 19:31:23.510158  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:31:23.510164  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:31:23.510218  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:31:23.548111  459056 cri.go:89] found id: ""
	I0510 19:31:23.548146  459056 logs.go:282] 0 containers: []
	W0510 19:31:23.548155  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:31:23.548165  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:31:23.548177  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:31:23.592214  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:31:23.592252  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:31:23.644384  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:31:23.644431  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:31:23.660004  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:31:23.660050  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:31:23.737601  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:31:23.737630  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:31:23.737646  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:31:26.318790  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:31:26.335345  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:31:26.335418  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:31:26.374890  459056 cri.go:89] found id: ""
	I0510 19:31:26.374925  459056 logs.go:282] 0 containers: []
	W0510 19:31:26.374939  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:31:26.374949  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:31:26.375022  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:31:26.416223  459056 cri.go:89] found id: ""
	I0510 19:31:26.416256  459056 logs.go:282] 0 containers: []
	W0510 19:31:26.416269  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:31:26.416279  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:31:26.416360  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:31:26.455431  459056 cri.go:89] found id: ""
	I0510 19:31:26.455472  459056 logs.go:282] 0 containers: []
	W0510 19:31:26.455485  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:31:26.455493  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:31:26.455563  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:31:26.493542  459056 cri.go:89] found id: ""
	I0510 19:31:26.493569  459056 logs.go:282] 0 containers: []
	W0510 19:31:26.493579  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:31:26.493588  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:31:26.493657  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:31:26.536613  459056 cri.go:89] found id: ""
	I0510 19:31:26.536642  459056 logs.go:282] 0 containers: []
	W0510 19:31:26.536651  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:31:26.536657  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:31:26.536742  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:31:26.574555  459056 cri.go:89] found id: ""
	I0510 19:31:26.574589  459056 logs.go:282] 0 containers: []
	W0510 19:31:26.574601  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:31:26.574610  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:31:26.574686  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:31:26.615726  459056 cri.go:89] found id: ""
	I0510 19:31:26.615767  459056 logs.go:282] 0 containers: []
	W0510 19:31:26.615779  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:31:26.615794  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:31:26.616130  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:31:26.658332  459056 cri.go:89] found id: ""
	I0510 19:31:26.658364  459056 logs.go:282] 0 containers: []
	W0510 19:31:26.658373  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:31:26.658382  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:31:26.658397  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:31:26.714050  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:31:26.714103  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:31:26.729247  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:31:26.729283  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:31:26.802056  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:31:26.802098  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:31:26.802117  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:31:26.880723  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:31:26.880777  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:31:29.424963  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:31:29.442400  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:31:29.442471  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:31:29.480974  459056 cri.go:89] found id: ""
	I0510 19:31:29.481014  459056 logs.go:282] 0 containers: []
	W0510 19:31:29.481025  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:31:29.481032  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:31:29.481103  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:31:29.517132  459056 cri.go:89] found id: ""
	I0510 19:31:29.517178  459056 logs.go:282] 0 containers: []
	W0510 19:31:29.517190  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:31:29.517199  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:31:29.517271  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:31:29.555573  459056 cri.go:89] found id: ""
	I0510 19:31:29.555610  459056 logs.go:282] 0 containers: []
	W0510 19:31:29.555621  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:31:29.555629  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:31:29.555706  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:31:29.591136  459056 cri.go:89] found id: ""
	I0510 19:31:29.591168  459056 logs.go:282] 0 containers: []
	W0510 19:31:29.591175  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:31:29.591181  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:31:29.591249  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:31:29.629174  459056 cri.go:89] found id: ""
	I0510 19:31:29.629205  459056 logs.go:282] 0 containers: []
	W0510 19:31:29.629214  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:31:29.629220  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:31:29.629285  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:31:29.666035  459056 cri.go:89] found id: ""
	I0510 19:31:29.666067  459056 logs.go:282] 0 containers: []
	W0510 19:31:29.666075  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:31:29.666081  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:31:29.666140  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:31:29.705842  459056 cri.go:89] found id: ""
	I0510 19:31:29.705872  459056 logs.go:282] 0 containers: []
	W0510 19:31:29.705880  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:31:29.705886  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:31:29.705964  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:31:29.743559  459056 cri.go:89] found id: ""
	I0510 19:31:29.743592  459056 logs.go:282] 0 containers: []
	W0510 19:31:29.743600  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:31:29.743623  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:31:29.743637  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:31:29.792453  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:31:29.792499  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:31:29.807725  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:31:29.807765  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:31:29.881784  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:31:29.881812  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:31:29.881825  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:31:29.954965  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:31:29.955014  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:31:32.502586  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:31:32.520169  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:31:32.520239  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:31:32.557308  459056 cri.go:89] found id: ""
	I0510 19:31:32.557342  459056 logs.go:282] 0 containers: []
	W0510 19:31:32.557350  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:31:32.557356  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:31:32.557411  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:31:32.595792  459056 cri.go:89] found id: ""
	I0510 19:31:32.595822  459056 logs.go:282] 0 containers: []
	W0510 19:31:32.595830  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:31:32.595835  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:31:32.595891  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:31:32.634389  459056 cri.go:89] found id: ""
	I0510 19:31:32.634429  459056 logs.go:282] 0 containers: []
	W0510 19:31:32.634437  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:31:32.634443  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:31:32.634517  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:31:32.675925  459056 cri.go:89] found id: ""
	I0510 19:31:32.675957  459056 logs.go:282] 0 containers: []
	W0510 19:31:32.675966  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:31:32.675973  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:31:32.676027  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:31:32.712730  459056 cri.go:89] found id: ""
	I0510 19:31:32.712767  459056 logs.go:282] 0 containers: []
	W0510 19:31:32.712776  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:31:32.712782  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:31:32.712843  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:31:32.749733  459056 cri.go:89] found id: ""
	I0510 19:31:32.749765  459056 logs.go:282] 0 containers: []
	W0510 19:31:32.749774  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:31:32.749781  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:31:32.749841  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:31:32.789481  459056 cri.go:89] found id: ""
	I0510 19:31:32.789513  459056 logs.go:282] 0 containers: []
	W0510 19:31:32.789521  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:31:32.789527  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:31:32.789586  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:31:32.828742  459056 cri.go:89] found id: ""
	I0510 19:31:32.828779  459056 logs.go:282] 0 containers: []
	W0510 19:31:32.828788  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:31:32.828798  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:31:32.828822  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:31:32.843753  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:31:32.843787  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:31:32.912953  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:31:32.912982  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:31:32.912995  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:31:32.989726  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:31:32.989770  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:31:33.040906  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:31:33.040943  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:31:35.593878  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:31:35.612402  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:31:35.612506  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:31:35.651532  459056 cri.go:89] found id: ""
	I0510 19:31:35.651562  459056 logs.go:282] 0 containers: []
	W0510 19:31:35.651571  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:31:35.651579  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:31:35.651671  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:31:35.689499  459056 cri.go:89] found id: ""
	I0510 19:31:35.689530  459056 logs.go:282] 0 containers: []
	W0510 19:31:35.689539  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:31:35.689546  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:31:35.689611  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:31:35.729195  459056 cri.go:89] found id: ""
	I0510 19:31:35.729230  459056 logs.go:282] 0 containers: []
	W0510 19:31:35.729239  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:31:35.729245  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:31:35.729314  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:31:35.767099  459056 cri.go:89] found id: ""
	I0510 19:31:35.767133  459056 logs.go:282] 0 containers: []
	W0510 19:31:35.767146  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:31:35.767151  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:31:35.767208  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:31:35.808130  459056 cri.go:89] found id: ""
	I0510 19:31:35.808166  459056 logs.go:282] 0 containers: []
	W0510 19:31:35.808179  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:31:35.808187  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:31:35.808261  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:31:35.845791  459056 cri.go:89] found id: ""
	I0510 19:31:35.845824  459056 logs.go:282] 0 containers: []
	W0510 19:31:35.845834  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:31:35.845841  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:31:35.846005  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:31:35.884049  459056 cri.go:89] found id: ""
	I0510 19:31:35.884083  459056 logs.go:282] 0 containers: []
	W0510 19:31:35.884093  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:31:35.884101  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:31:35.884182  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:31:35.921358  459056 cri.go:89] found id: ""
	I0510 19:31:35.921405  459056 logs.go:282] 0 containers: []
	W0510 19:31:35.921438  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:31:35.921454  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:31:35.921471  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:31:35.975819  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:31:35.975866  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:31:35.991683  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:31:35.991719  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:31:36.062576  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:31:36.062609  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:31:36.062692  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:31:36.144124  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:31:36.144171  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0510 19:31:38.688627  459056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 19:31:38.706961  459056 kubeadm.go:593] duration metric: took 4m1.80853031s to restartPrimaryControlPlane
	W0510 19:31:38.707088  459056 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0510 19:31:38.707129  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0510 19:31:42.433199  459056 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.726037031s)
	I0510 19:31:42.433304  459056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0510 19:31:42.450520  459056 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0510 19:31:42.464170  459056 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0510 19:31:42.478440  459056 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0510 19:31:42.478465  459056 kubeadm.go:157] found existing configuration files:
	
	I0510 19:31:42.478527  459056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0510 19:31:42.490756  459056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0510 19:31:42.490825  459056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0510 19:31:42.503476  459056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0510 19:31:42.516078  459056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0510 19:31:42.516162  459056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0510 19:31:42.529093  459056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0510 19:31:42.541784  459056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0510 19:31:42.541857  459056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0510 19:31:42.554154  459056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0510 19:31:42.566298  459056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0510 19:31:42.566366  459056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0510 19:31:42.579144  459056 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0510 19:31:42.808604  459056 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0510 19:33:39.237462  459056 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0510 19:33:39.237653  459056 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0510 19:33:39.240214  459056 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0510 19:33:39.240284  459056 kubeadm.go:310] [preflight] Running pre-flight checks
	I0510 19:33:39.240378  459056 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0510 19:33:39.240505  459056 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0510 19:33:39.240669  459056 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0510 19:33:39.240726  459056 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0510 19:33:39.242836  459056 out.go:235]   - Generating certificates and keys ...
	I0510 19:33:39.242931  459056 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0510 19:33:39.243010  459056 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0510 19:33:39.243103  459056 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0510 19:33:39.243180  459056 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0510 19:33:39.243286  459056 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0510 19:33:39.243366  459056 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0510 19:33:39.243440  459056 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0510 19:33:39.243544  459056 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0510 19:33:39.243662  459056 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0510 19:33:39.243769  459056 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0510 19:33:39.243830  459056 kubeadm.go:310] [certs] Using the existing "sa" key
	I0510 19:33:39.243905  459056 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0510 19:33:39.243972  459056 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0510 19:33:39.244018  459056 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0510 19:33:39.244072  459056 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0510 19:33:39.244132  459056 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0510 19:33:39.244227  459056 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0510 19:33:39.244322  459056 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0510 19:33:39.244375  459056 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0510 19:33:39.244459  459056 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0510 19:33:39.246586  459056 out.go:235]   - Booting up control plane ...
	I0510 19:33:39.246698  459056 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0510 19:33:39.246800  459056 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0510 19:33:39.246872  459056 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0510 19:33:39.246943  459056 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0510 19:33:39.247151  459056 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0510 19:33:39.247198  459056 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0510 19:33:39.247270  459056 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:33:39.247423  459056 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:33:39.247478  459056 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:33:39.247671  459056 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:33:39.247748  459056 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:33:39.247894  459056 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:33:39.247981  459056 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:33:39.248179  459056 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:33:39.248247  459056 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:33:39.248415  459056 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:33:39.248423  459056 kubeadm.go:310] 
	I0510 19:33:39.248461  459056 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0510 19:33:39.248497  459056 kubeadm.go:310] 		timed out waiting for the condition
	I0510 19:33:39.248507  459056 kubeadm.go:310] 
	I0510 19:33:39.248540  459056 kubeadm.go:310] 	This error is likely caused by:
	I0510 19:33:39.248570  459056 kubeadm.go:310] 		- The kubelet is not running
	I0510 19:33:39.248664  459056 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0510 19:33:39.248671  459056 kubeadm.go:310] 
	I0510 19:33:39.248767  459056 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0510 19:33:39.248803  459056 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0510 19:33:39.248832  459056 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0510 19:33:39.248839  459056 kubeadm.go:310] 
	I0510 19:33:39.248927  459056 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0510 19:33:39.249007  459056 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0510 19:33:39.249014  459056 kubeadm.go:310] 
	I0510 19:33:39.249164  459056 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0510 19:33:39.249288  459056 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0510 19:33:39.249351  459056 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0510 19:33:39.249408  459056 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0510 19:33:39.249484  459056 kubeadm.go:310] 
	W0510 19:33:39.249624  459056 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
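	For anyone reproducing this failure interactively on the node, a minimal Go sketch of the same health probe that kubeadm's [kubelet-check] phase is performing above. Only the endpoint http://localhost:10248/healthz is taken from the captured output; the program itself is illustrative and not part of minikube or kubeadm.

	    // kubelet_healthz.go - probe the kubelet health endpoint that kubeadm polls.
	    // A "connection refused" error here matches the failure shown in the log:
	    // the kubelet never came up, so the control plane cannot start.
	    package main

	    import (
	        "fmt"
	        "io"
	        "net/http"
	        "os"
	        "time"
	    )

	    func main() {
	        client := &http.Client{Timeout: 5 * time.Second}
	        resp, err := client.Get("http://localhost:10248/healthz")
	        if err != nil {
	            fmt.Fprintf(os.Stderr, "kubelet healthz probe failed: %v\n", err)
	            os.Exit(1)
	        }
	        defer resp.Body.Close()
	        body, _ := io.ReadAll(resp.Body)
	        fmt.Printf("kubelet healthz: %s %s\n", resp.Status, body)
	    }

	If the probe fails, 'systemctl status kubelet' and 'journalctl -xeu kubelet' (as suggested in the kubeadm output above) are the next places to look.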
	
	I0510 19:33:39.249703  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0510 19:33:39.710770  459056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0510 19:33:39.729461  459056 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0510 19:33:39.741531  459056 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0510 19:33:39.741556  459056 kubeadm.go:157] found existing configuration files:
	
	I0510 19:33:39.741617  459056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0510 19:33:39.752271  459056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0510 19:33:39.752339  459056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0510 19:33:39.764450  459056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0510 19:33:39.775142  459056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0510 19:33:39.775203  459056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0510 19:33:39.787008  459056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0510 19:33:39.798070  459056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0510 19:33:39.798143  459056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0510 19:33:39.809980  459056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0510 19:33:39.821862  459056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0510 19:33:39.821930  459056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0510 19:33:39.833890  459056 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0510 19:33:40.070673  459056 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0510 19:35:36.029186  459056 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0510 19:35:36.029314  459056 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0510 19:35:36.032027  459056 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0510 19:35:36.032078  459056 kubeadm.go:310] [preflight] Running pre-flight checks
	I0510 19:35:36.032177  459056 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0510 19:35:36.032280  459056 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0510 19:35:36.032361  459056 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0510 19:35:36.032446  459056 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0510 19:35:36.034371  459056 out.go:235]   - Generating certificates and keys ...
	I0510 19:35:36.034447  459056 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0510 19:35:36.034498  459056 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0510 19:35:36.034563  459056 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0510 19:35:36.034612  459056 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0510 19:35:36.034675  459056 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0510 19:35:36.034778  459056 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0510 19:35:36.034874  459056 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0510 19:35:36.034977  459056 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0510 19:35:36.035054  459056 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0510 19:35:36.035126  459056 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0510 19:35:36.035158  459056 kubeadm.go:310] [certs] Using the existing "sa" key
	I0510 19:35:36.035206  459056 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0510 19:35:36.035286  459056 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0510 19:35:36.035370  459056 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0510 19:35:36.035434  459056 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0510 19:35:36.035501  459056 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0510 19:35:36.035658  459056 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0510 19:35:36.035738  459056 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0510 19:35:36.035795  459056 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0510 19:35:36.035884  459056 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0510 19:35:36.037686  459056 out.go:235]   - Booting up control plane ...
	I0510 19:35:36.037791  459056 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0510 19:35:36.037869  459056 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0510 19:35:36.037934  459056 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0510 19:35:36.038008  459056 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0510 19:35:36.038231  459056 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0510 19:35:36.038305  459056 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0510 19:35:36.038398  459056 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:35:36.038630  459056 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:35:36.038727  459056 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:35:36.038913  459056 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:35:36.038987  459056 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:35:36.039203  459056 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:35:36.039326  459056 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:35:36.039577  459056 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:35:36.039655  459056 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0510 19:35:36.039818  459056 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0510 19:35:36.039825  459056 kubeadm.go:310] 
	I0510 19:35:36.039859  459056 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0510 19:35:36.039904  459056 kubeadm.go:310] 		timed out waiting for the condition
	I0510 19:35:36.039919  459056 kubeadm.go:310] 
	I0510 19:35:36.039948  459056 kubeadm.go:310] 	This error is likely caused by:
	I0510 19:35:36.039978  459056 kubeadm.go:310] 		- The kubelet is not running
	I0510 19:35:36.040071  459056 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0510 19:35:36.040078  459056 kubeadm.go:310] 
	I0510 19:35:36.040179  459056 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0510 19:35:36.040209  459056 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0510 19:35:36.040237  459056 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0510 19:35:36.040244  459056 kubeadm.go:310] 
	I0510 19:35:36.040337  459056 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0510 19:35:36.040419  459056 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0510 19:35:36.040442  459056 kubeadm.go:310] 
	I0510 19:35:36.040555  459056 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0510 19:35:36.040655  459056 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0510 19:35:36.040766  459056 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0510 19:35:36.040836  459056 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0510 19:35:36.040862  459056 kubeadm.go:310] 
	I0510 19:35:36.040906  459056 kubeadm.go:394] duration metric: took 7m59.202425038s to StartCluster
	I0510 19:35:36.040958  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0510 19:35:36.041023  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0510 19:35:36.097650  459056 cri.go:89] found id: ""
	I0510 19:35:36.097683  459056 logs.go:282] 0 containers: []
	W0510 19:35:36.097698  459056 logs.go:284] No container was found matching "kube-apiserver"
	I0510 19:35:36.097708  459056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0510 19:35:36.097773  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0510 19:35:36.142587  459056 cri.go:89] found id: ""
	I0510 19:35:36.142619  459056 logs.go:282] 0 containers: []
	W0510 19:35:36.142627  459056 logs.go:284] No container was found matching "etcd"
	I0510 19:35:36.142633  459056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0510 19:35:36.142702  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0510 19:35:36.186330  459056 cri.go:89] found id: ""
	I0510 19:35:36.186361  459056 logs.go:282] 0 containers: []
	W0510 19:35:36.186370  459056 logs.go:284] No container was found matching "coredns"
	I0510 19:35:36.186376  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0510 19:35:36.186444  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0510 19:35:36.230965  459056 cri.go:89] found id: ""
	I0510 19:35:36.230994  459056 logs.go:282] 0 containers: []
	W0510 19:35:36.231001  459056 logs.go:284] No container was found matching "kube-scheduler"
	I0510 19:35:36.231007  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0510 19:35:36.231062  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0510 19:35:36.276491  459056 cri.go:89] found id: ""
	I0510 19:35:36.276520  459056 logs.go:282] 0 containers: []
	W0510 19:35:36.276528  459056 logs.go:284] No container was found matching "kube-proxy"
	I0510 19:35:36.276534  459056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0510 19:35:36.276598  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0510 19:35:36.321937  459056 cri.go:89] found id: ""
	I0510 19:35:36.321971  459056 logs.go:282] 0 containers: []
	W0510 19:35:36.321980  459056 logs.go:284] No container was found matching "kube-controller-manager"
	I0510 19:35:36.321987  459056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0510 19:35:36.322050  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0510 19:35:36.364757  459056 cri.go:89] found id: ""
	I0510 19:35:36.364797  459056 logs.go:282] 0 containers: []
	W0510 19:35:36.364809  459056 logs.go:284] No container was found matching "kindnet"
	I0510 19:35:36.364818  459056 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0510 19:35:36.364875  459056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0510 19:35:36.409488  459056 cri.go:89] found id: ""
	I0510 19:35:36.409523  459056 logs.go:282] 0 containers: []
	W0510 19:35:36.409532  459056 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0510 19:35:36.409546  459056 logs.go:123] Gathering logs for kubelet ...
	I0510 19:35:36.409561  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0510 19:35:36.462665  459056 logs.go:123] Gathering logs for dmesg ...
	I0510 19:35:36.462705  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0510 19:35:36.478560  459056 logs.go:123] Gathering logs for describe nodes ...
	I0510 19:35:36.478591  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0510 19:35:36.555871  459056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0510 19:35:36.555904  459056 logs.go:123] Gathering logs for CRI-O ...
	I0510 19:35:36.555922  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0510 19:35:36.674559  459056 logs.go:123] Gathering logs for container status ...
	I0510 19:35:36.674603  459056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0510 19:35:36.723413  459056 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0510 19:35:36.723488  459056 out.go:270] * 
	W0510 19:35:36.723574  459056 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0510 19:35:36.723589  459056 out.go:270] * 
	W0510 19:35:36.724458  459056 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0510 19:35:36.727493  459056 out.go:201] 
	W0510 19:35:36.728543  459056 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0510 19:35:36.728588  459056 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0510 19:35:36.728604  459056 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0510 19:35:36.729894  459056 out.go:201] 
	
	
	==> CRI-O <==
	May 10 19:50:50 old-k8s-version-089147 crio[815]: time="2025-05-10 19:50:50.060256537Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746906650060221906,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0ffd359f-97f2-41d0-8785-ed00c80d3462 name=/runtime.v1.ImageService/ImageFsInfo
	May 10 19:50:50 old-k8s-version-089147 crio[815]: time="2025-05-10 19:50:50.060962281Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f1517804-1010-4242-a899-1b057d65af05 name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:50:50 old-k8s-version-089147 crio[815]: time="2025-05-10 19:50:50.061031771Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f1517804-1010-4242-a899-1b057d65af05 name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:50:50 old-k8s-version-089147 crio[815]: time="2025-05-10 19:50:50.061079386Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f1517804-1010-4242-a899-1b057d65af05 name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:50:50 old-k8s-version-089147 crio[815]: time="2025-05-10 19:50:50.104112960Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fa7e2321-1629-4f94-97ea-7cf74c81700b name=/runtime.v1.RuntimeService/Version
	May 10 19:50:50 old-k8s-version-089147 crio[815]: time="2025-05-10 19:50:50.104251839Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fa7e2321-1629-4f94-97ea-7cf74c81700b name=/runtime.v1.RuntimeService/Version
	May 10 19:50:50 old-k8s-version-089147 crio[815]: time="2025-05-10 19:50:50.105754288Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3d579c4d-e1e2-4668-a29e-c90c28a95095 name=/runtime.v1.ImageService/ImageFsInfo
	May 10 19:50:50 old-k8s-version-089147 crio[815]: time="2025-05-10 19:50:50.106238333Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746906650106206109,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3d579c4d-e1e2-4668-a29e-c90c28a95095 name=/runtime.v1.ImageService/ImageFsInfo
	May 10 19:50:50 old-k8s-version-089147 crio[815]: time="2025-05-10 19:50:50.106890199Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4eee3f5b-29bc-4ca1-91ae-d9837888eb5b name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:50:50 old-k8s-version-089147 crio[815]: time="2025-05-10 19:50:50.106949307Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4eee3f5b-29bc-4ca1-91ae-d9837888eb5b name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:50:50 old-k8s-version-089147 crio[815]: time="2025-05-10 19:50:50.106981862Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=4eee3f5b-29bc-4ca1-91ae-d9837888eb5b name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:50:50 old-k8s-version-089147 crio[815]: time="2025-05-10 19:50:50.143012010Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=75b98565-58f6-4a28-a8e6-583962d601f3 name=/runtime.v1.RuntimeService/Version
	May 10 19:50:50 old-k8s-version-089147 crio[815]: time="2025-05-10 19:50:50.143095118Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=75b98565-58f6-4a28-a8e6-583962d601f3 name=/runtime.v1.RuntimeService/Version
	May 10 19:50:50 old-k8s-version-089147 crio[815]: time="2025-05-10 19:50:50.144962601Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bbbcdcf6-ff01-4a81-8f6c-49162cc213d6 name=/runtime.v1.ImageService/ImageFsInfo
	May 10 19:50:50 old-k8s-version-089147 crio[815]: time="2025-05-10 19:50:50.145476014Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746906650145442824,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bbbcdcf6-ff01-4a81-8f6c-49162cc213d6 name=/runtime.v1.ImageService/ImageFsInfo
	May 10 19:50:50 old-k8s-version-089147 crio[815]: time="2025-05-10 19:50:50.146340578Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f5ee0d5e-6ebb-4c00-9e7a-f9e5e5fae120 name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:50:50 old-k8s-version-089147 crio[815]: time="2025-05-10 19:50:50.146440338Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f5ee0d5e-6ebb-4c00-9e7a-f9e5e5fae120 name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:50:50 old-k8s-version-089147 crio[815]: time="2025-05-10 19:50:50.146481008Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f5ee0d5e-6ebb-4c00-9e7a-f9e5e5fae120 name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:50:50 old-k8s-version-089147 crio[815]: time="2025-05-10 19:50:50.181136946Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b0e5f1ad-d238-4633-8a15-05a2f2f01df9 name=/runtime.v1.RuntimeService/Version
	May 10 19:50:50 old-k8s-version-089147 crio[815]: time="2025-05-10 19:50:50.181305155Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b0e5f1ad-d238-4633-8a15-05a2f2f01df9 name=/runtime.v1.RuntimeService/Version
	May 10 19:50:50 old-k8s-version-089147 crio[815]: time="2025-05-10 19:50:50.183031320Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=acd319c8-e4fe-4382-b310-fac76686b960 name=/runtime.v1.ImageService/ImageFsInfo
	May 10 19:50:50 old-k8s-version-089147 crio[815]: time="2025-05-10 19:50:50.183730026Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746906650183706209,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=acd319c8-e4fe-4382-b310-fac76686b960 name=/runtime.v1.ImageService/ImageFsInfo
	May 10 19:50:50 old-k8s-version-089147 crio[815]: time="2025-05-10 19:50:50.184342378Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=917285f5-cd9f-4a28-8e50-fc9a59772803 name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:50:50 old-k8s-version-089147 crio[815]: time="2025-05-10 19:50:50.184406198Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=917285f5-cd9f-4a28-8e50-fc9a59772803 name=/runtime.v1.RuntimeService/ListContainers
	May 10 19:50:50 old-k8s-version-089147 crio[815]: time="2025-05-10 19:50:50.184448347Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=917285f5-cd9f-4a28-8e50-fc9a59772803 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[May10 19:27] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.000002] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.001401] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.001737] (rpcbind)[143]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.974355] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000007] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.102715] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.103174] kauditd_printk_skb: 74 callbacks suppressed
	[ +14.627732] kauditd_printk_skb: 46 callbacks suppressed
	[May10 19:33] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:50:50 up 23 min,  0 user,  load average: 0.05, 0.08, 0.08
	Linux old-k8s-version-089147 5.10.207 #1 SMP Fri May 9 03:49:24 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2024.11.2"
	
	
	==> kubelet <==
	May 10 19:50:46 old-k8s-version-089147 kubelet[8729]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	May 10 19:50:46 old-k8s-version-089147 kubelet[8729]: created by net/http.(*Transport).queueForDial
	May 10 19:50:46 old-k8s-version-089147 kubelet[8729]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	May 10 19:50:46 old-k8s-version-089147 kubelet[8729]: goroutine 146 [runnable]:
	May 10 19:50:46 old-k8s-version-089147 kubelet[8729]: net.(*conf).hostLookupOrder(0x6fb92c0, 0x70c5740, 0xc00093cc00, 0x1f, 0x0)
	May 10 19:50:46 old-k8s-version-089147 kubelet[8729]:         /usr/local/go/src/net/conf.go:203 +0x864
	May 10 19:50:46 old-k8s-version-089147 kubelet[8729]: net.(*Resolver).lookupIP(0x70c5740, 0x4f7fdc0, 0xc00049d480, 0x48ab5d6, 0x3, 0xc00093cc00, 0x1f, 0x0, 0x4a707e8, 0x0, ...)
	May 10 19:50:46 old-k8s-version-089147 kubelet[8729]:         /usr/local/go/src/net/lookup_unix.go:94 +0x86
	May 10 19:50:46 old-k8s-version-089147 kubelet[8729]: net.glob..func1(0x4f7fdc0, 0xc00049d480, 0xc00097d020, 0x48ab5d6, 0x3, 0xc00093cc00, 0x1f, 0xc000120018, 0x0, 0xc000c00180, ...)
	May 10 19:50:46 old-k8s-version-089147 kubelet[8729]:         /usr/local/go/src/net/hook.go:23 +0x72
	May 10 19:50:46 old-k8s-version-089147 kubelet[8729]: net.(*Resolver).lookupIPAddr.func1(0x0, 0x0, 0x0, 0x0)
	May 10 19:50:46 old-k8s-version-089147 kubelet[8729]:         /usr/local/go/src/net/lookup.go:293 +0xb9
	May 10 19:50:46 old-k8s-version-089147 kubelet[8729]: internal/singleflight.(*Group).doCall(0x70c5750, 0xc000c47310, 0xc00093cc30, 0x23, 0xc00049d4c0)
	May 10 19:50:46 old-k8s-version-089147 kubelet[8729]:         /usr/local/go/src/internal/singleflight/singleflight.go:95 +0x2e
	May 10 19:50:46 old-k8s-version-089147 kubelet[8729]: created by internal/singleflight.(*Group).DoChan
	May 10 19:50:46 old-k8s-version-089147 kubelet[8729]:         /usr/local/go/src/internal/singleflight/singleflight.go:88 +0x2cc
	May 10 19:50:46 old-k8s-version-089147 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	May 10 19:50:46 old-k8s-version-089147 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	May 10 19:50:47 old-k8s-version-089147 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 178.
	May 10 19:50:47 old-k8s-version-089147 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	May 10 19:50:47 old-k8s-version-089147 kubelet[8739]: I0510 19:50:47.355533    8739 server.go:416] Version: v1.20.0
	May 10 19:50:47 old-k8s-version-089147 kubelet[8739]: I0510 19:50:47.355812    8739 server.go:837] Client rotation is on, will bootstrap in background
	May 10 19:50:47 old-k8s-version-089147 kubelet[8739]: I0510 19:50:47.357869    8739 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	May 10 19:50:47 old-k8s-version-089147 kubelet[8739]: W0510 19:50:47.358848    8739 manager.go:159] Cannot detect current cgroup on cgroup v2
	May 10 19:50:47 old-k8s-version-089147 kubelet[8739]: I0510 19:50:47.359067    8739 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-089147 -n old-k8s-version-089147
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-089147 -n old-k8s-version-089147: exit status 2 (250.41506ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-089147" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (369.71s)
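Note: the failure above ends with kubeadm's wait-control-plane timeout, and the captured log itself suggests retrying with the kubelet cgroup-driver override. A minimal sketch of applying that suggestion by hand is below; the profile name is taken from the failing test, the flags come from the suggestion and commands printed in the log, and whether this resolves this particular run is an assumption, not something verified here.

	# restart the affected profile with the cgroup driver override suggested in the log
	minikube start -p old-k8s-version-089147 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd
	# then inspect the kubelet, as the kubeadm output recommends
	minikube ssh -p old-k8s-version-089147 -- sudo journalctl -xeu kubelet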

                                                
                                    

Test pass (260/321)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 8.2
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.15
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.33.0/json-events 4.02
13 TestDownloadOnly/v1.33.0/preload-exists 0
17 TestDownloadOnly/v1.33.0/LogsDuration 0.07
18 TestDownloadOnly/v1.33.0/DeleteAll 0.14
19 TestDownloadOnly/v1.33.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.63
22 TestOffline 95.38
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 137.52
31 TestAddons/serial/GCPAuth/Namespaces 0.15
32 TestAddons/serial/GCPAuth/FakeCredentials 9.55
35 TestAddons/parallel/Registry 19.42
37 TestAddons/parallel/InspektorGadget 10.91
38 TestAddons/parallel/MetricsServer 6.8
40 TestAddons/parallel/CSI 47.36
41 TestAddons/parallel/Headlamp 18.84
42 TestAddons/parallel/CloudSpanner 5.81
43 TestAddons/parallel/LocalPath 57.05
44 TestAddons/parallel/NvidiaDevicePlugin 6.3
45 TestAddons/parallel/Yakd 12.19
47 TestAddons/StoppedEnableDisable 91.33
48 TestCertOptions 71.41
49 TestCertExpiration 262.46
51 TestForceSystemdFlag 68.89
52 TestForceSystemdEnv 78.26
54 TestKVMDriverInstallOrUpdate 1.29
58 TestErrorSpam/setup 50.94
59 TestErrorSpam/start 0.38
60 TestErrorSpam/status 0.85
61 TestErrorSpam/pause 1.94
62 TestErrorSpam/unpause 2.02
63 TestErrorSpam/stop 4.68
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 91.26
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 37.37
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.12
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.49
75 TestFunctional/serial/CacheCmd/cache/add_local 1.19
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.25
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.81
80 TestFunctional/serial/CacheCmd/cache/delete 0.1
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
85 TestFunctional/serial/LogsCmd 1.5
86 TestFunctional/serial/LogsFileCmd 1.51
87 TestFunctional/serial/InvalidService 3.62
89 TestFunctional/parallel/ConfigCmd 0.37
91 TestFunctional/parallel/DryRun 0.29
92 TestFunctional/parallel/InternationalLanguage 0.14
93 TestFunctional/parallel/StatusCmd 0.84
98 TestFunctional/parallel/AddonsCmd 0.14
101 TestFunctional/parallel/SSHCmd 0.41
102 TestFunctional/parallel/CpCmd 1.55
104 TestFunctional/parallel/FileSync 0.24
105 TestFunctional/parallel/CertSync 1.56
109 TestFunctional/parallel/NodeLabels 0.09
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.52
113 TestFunctional/parallel/License 0.2
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
118 TestFunctional/parallel/ImageCommands/ImageBuild 3.55
119 TestFunctional/parallel/ImageCommands/Setup 0.42
120 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.8
121 TestFunctional/parallel/Version/short 0.05
122 TestFunctional/parallel/Version/components 0.5
123 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
124 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
125 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
127 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.94
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.05
138 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.54
139 TestFunctional/parallel/ImageCommands/ImageRemove 0.55
140 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.96
141 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.61
142 TestFunctional/parallel/ProfileCmd/profile_not_create 0.36
143 TestFunctional/parallel/ProfileCmd/profile_list 0.33
144 TestFunctional/parallel/ProfileCmd/profile_json_output 0.34
146 TestFunctional/parallel/MountCmd/specific-port 1.78
147 TestFunctional/parallel/MountCmd/VerifyCleanup 1.55
148 TestFunctional/parallel/ServiceCmd/List 1.28
149 TestFunctional/parallel/ServiceCmd/JSONOutput 1.28
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 229.51
161 TestMultiControlPlane/serial/DeployApp 6.45
162 TestMultiControlPlane/serial/PingHostFromPods 1.29
163 TestMultiControlPlane/serial/AddWorkerNode 51.35
164 TestMultiControlPlane/serial/NodeLabels 0.07
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.96
166 TestMultiControlPlane/serial/CopyFile 13.98
167 TestMultiControlPlane/serial/StopSecondaryNode 91.75
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.72
169 TestMultiControlPlane/serial/RestartSecondaryNode 36.27
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.05
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 411.62
172 TestMultiControlPlane/serial/DeleteSecondaryNode 19.57
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.69
174 TestMultiControlPlane/serial/StopCluster 273.07
175 TestMultiControlPlane/serial/RestartCluster 126.79
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.69
177 TestMultiControlPlane/serial/AddSecondaryNode 114.66
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.96
182 TestJSONOutput/start/Command 88.11
183 TestJSONOutput/start/Audit 0
185 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/pause/Command 0.81
189 TestJSONOutput/pause/Audit 0
191 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/unpause/Command 0.75
195 TestJSONOutput/unpause/Audit 0
197 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/stop/Command 7.35
201 TestJSONOutput/stop/Audit 0
203 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
205 TestErrorJSONOutput 0.21
210 TestMainNoArgs 0.05
211 TestMinikubeProfile 99.47
214 TestMountStart/serial/StartWithMountFirst 32.08
215 TestMountStart/serial/VerifyMountFirst 0.41
216 TestMountStart/serial/StartWithMountSecond 27.16
217 TestMountStart/serial/VerifyMountSecond 0.4
218 TestMountStart/serial/DeleteFirst 0.9
219 TestMountStart/serial/VerifyMountPostDelete 0.41
220 TestMountStart/serial/Stop 1.43
221 TestMountStart/serial/RestartStopped 24.27
222 TestMountStart/serial/VerifyMountPostStop 0.43
225 TestMultiNode/serial/FreshStart2Nodes 114.78
226 TestMultiNode/serial/DeployApp2Nodes 5.38
227 TestMultiNode/serial/PingHostFrom2Pods 0.83
228 TestMultiNode/serial/AddNode 48.16
229 TestMultiNode/serial/MultiNodeLabels 0.06
230 TestMultiNode/serial/ProfileList 0.63
231 TestMultiNode/serial/CopyFile 7.76
232 TestMultiNode/serial/StopNode 3.21
233 TestMultiNode/serial/StartAfterStop 38.74
234 TestMultiNode/serial/RestartKeepsNodes 327.97
235 TestMultiNode/serial/DeleteNode 2.96
236 TestMultiNode/serial/StopMultiNode 181.97
237 TestMultiNode/serial/RestartMultiNode 91.02
238 TestMultiNode/serial/ValidateNameConflict 48.37
245 TestScheduledStopUnix 115.54
249 TestRunningBinaryUpgrade 200.46
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
258 TestNoKubernetes/serial/StartWithK8s 75.4
263 TestNetworkPlugins/group/false 4.14
267 TestNoKubernetes/serial/StartWithStopK8s 39.06
276 TestPause/serial/Start 105.4
277 TestNoKubernetes/serial/Start 51.48
278 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
279 TestNoKubernetes/serial/ProfileList 31.06
280 TestNoKubernetes/serial/Stop 1.47
281 TestNoKubernetes/serial/StartNoArgs 22.5
283 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
284 TestStoppedBinaryUpgrade/Setup 0.38
285 TestStoppedBinaryUpgrade/Upgrade 131.32
286 TestNetworkPlugins/group/auto/Start 113.74
287 TestStoppedBinaryUpgrade/MinikubeLogs 1.04
288 TestNetworkPlugins/group/kindnet/Start 76.33
289 TestNetworkPlugins/group/calico/Start 75.47
290 TestNetworkPlugins/group/auto/KubeletFlags 0.22
291 TestNetworkPlugins/group/auto/NetCatPod 11.26
292 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
293 TestNetworkPlugins/group/auto/DNS 0.17
294 TestNetworkPlugins/group/auto/Localhost 0.16
295 TestNetworkPlugins/group/auto/HairPin 0.15
296 TestNetworkPlugins/group/kindnet/KubeletFlags 0.36
297 TestNetworkPlugins/group/kindnet/NetCatPod 12.39
298 TestNetworkPlugins/group/custom-flannel/Start 84.78
299 TestNetworkPlugins/group/kindnet/DNS 0.18
300 TestNetworkPlugins/group/kindnet/Localhost 0.17
301 TestNetworkPlugins/group/kindnet/HairPin 0.18
302 TestNetworkPlugins/group/enable-default-cni/Start 110.64
303 TestNetworkPlugins/group/flannel/Start 115.44
304 TestNetworkPlugins/group/calico/ControllerPod 6.01
305 TestNetworkPlugins/group/calico/KubeletFlags 0.25
306 TestNetworkPlugins/group/calico/NetCatPod 12.35
307 TestNetworkPlugins/group/calico/DNS 0.17
308 TestNetworkPlugins/group/calico/Localhost 0.14
309 TestNetworkPlugins/group/calico/HairPin 0.13
310 TestNetworkPlugins/group/bridge/Start 90.8
311 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.24
312 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.28
313 TestNetworkPlugins/group/custom-flannel/DNS 0.17
314 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
315 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
318 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.24
319 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.3
320 TestNetworkPlugins/group/flannel/ControllerPod 6.01
321 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
322 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
323 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
324 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
325 TestNetworkPlugins/group/flannel/NetCatPod 11.25
326 TestNetworkPlugins/group/flannel/DNS 0.22
327 TestNetworkPlugins/group/flannel/Localhost 0.28
328 TestNetworkPlugins/group/flannel/HairPin 0.17
330 TestStartStop/group/no-preload/serial/FirstStart 111.03
331 TestNetworkPlugins/group/bridge/KubeletFlags 0.24
332 TestNetworkPlugins/group/bridge/NetCatPod 10.3
333 TestNetworkPlugins/group/bridge/DNS 0.21
334 TestNetworkPlugins/group/bridge/Localhost 0.17
335 TestNetworkPlugins/group/bridge/HairPin 0.18
337 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 103.94
339 TestStartStop/group/newest-cni/serial/FirstStart 78.66
340 TestStartStop/group/newest-cni/serial/DeployApp 0
341 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.14
342 TestStartStop/group/newest-cni/serial/Stop 10.58
343 TestStartStop/group/no-preload/serial/DeployApp 11.34
344 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.32
345 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
346 TestStartStop/group/newest-cni/serial/SecondStart 38.43
347 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.23
348 TestStartStop/group/no-preload/serial/Stop 91.06
349 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.24
350 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.12
351 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
352 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
353 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
354 TestStartStop/group/newest-cni/serial/Pause 2.61
356 TestStartStop/group/embed-certs/serial/FirstStart 88.25
357 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
358 TestStartStop/group/no-preload/serial/SecondStart 64.95
359 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
360 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 63.8
363 TestStartStop/group/embed-certs/serial/DeployApp 9.36
364 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.44
365 TestStartStop/group/embed-certs/serial/Stop 91.05
366 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 7.01
367 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
368 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
369 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
370 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
371 TestStartStop/group/no-preload/serial/Pause 2.93
372 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
373 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.81
374 TestStartStop/group/old-k8s-version/serial/Stop 5.31
375 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
377 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
378 TestStartStop/group/embed-certs/serial/SecondStart 55.3
379 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
380 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
381 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
382 TestStartStop/group/embed-certs/serial/Pause 2.86
x
+
TestDownloadOnly/v1.20.0/json-events (8.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-820244 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-820244 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (8.200465669s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.20s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0510 17:52:13.914674  395980 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0510 17:52:13.914804  395980 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-820244
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-820244: exit status 85 (66.238939ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-820244 | jenkins | v1.35.0 | 10 May 25 17:52 UTC |          |
	|         | -p download-only-820244        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/05/10 17:52:05
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0510 17:52:05.756998  395992 out.go:345] Setting OutFile to fd 1 ...
	I0510 17:52:05.757239  395992 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:52:05.757247  395992 out.go:358] Setting ErrFile to fd 2...
	I0510 17:52:05.757252  395992 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:52:05.757441  395992 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-388787/.minikube/bin
	W0510 17:52:05.757551  395992 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20720-388787/.minikube/config/config.json: open /home/jenkins/minikube-integration/20720-388787/.minikube/config/config.json: no such file or directory
	I0510 17:52:05.758110  395992 out.go:352] Setting JSON to true
	I0510 17:52:05.759025  395992 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":27274,"bootTime":1746872252,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0510 17:52:05.759132  395992 start.go:140] virtualization: kvm guest
	I0510 17:52:05.761673  395992 out.go:97] [download-only-820244] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	W0510 17:52:05.761803  395992 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20720-388787/.minikube/cache/preloaded-tarball: no such file or directory
	I0510 17:52:05.761842  395992 notify.go:220] Checking for updates...
	I0510 17:52:05.763215  395992 out.go:169] MINIKUBE_LOCATION=20720
	I0510 17:52:05.764488  395992 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0510 17:52:05.765629  395992 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20720-388787/kubeconfig
	I0510 17:52:05.766685  395992 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-388787/.minikube
	I0510 17:52:05.768001  395992 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0510 17:52:05.770454  395992 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0510 17:52:05.770718  395992 driver.go:404] Setting default libvirt URI to qemu:///system
	I0510 17:52:05.807470  395992 out.go:97] Using the kvm2 driver based on user configuration
	I0510 17:52:05.807529  395992 start.go:304] selected driver: kvm2
	I0510 17:52:05.807539  395992 start.go:908] validating driver "kvm2" against <nil>
	I0510 17:52:05.807935  395992 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0510 17:52:05.808028  395992 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20720-388787/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0510 17:52:05.824011  395992 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0510 17:52:05.824063  395992 start_flags.go:311] no existing cluster config was found, will generate one from the flags 
	I0510 17:52:05.824584  395992 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0510 17:52:05.824742  395992 start_flags.go:957] Wait components to verify : map[apiserver:true system_pods:true]
	I0510 17:52:05.824772  395992 cni.go:84] Creating CNI manager for ""
	I0510 17:52:05.824825  395992 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0510 17:52:05.824835  395992 start_flags.go:320] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0510 17:52:05.824893  395992 start.go:347] cluster config:
	{Name:download-only-820244 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-820244 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 17:52:05.825088  395992 iso.go:125] acquiring lock: {Name:mk19640015999219180c6685480547adf0c02201 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0510 17:52:05.827068  395992 out.go:97] Downloading VM boot image ...
	I0510 17:52:05.827100  395992 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20720-388787/.minikube/cache/iso/amd64/minikube-v1.35.0-1746739450-20720-amd64.iso
	I0510 17:52:08.526296  395992 out.go:97] Starting "download-only-820244" primary control-plane node in "download-only-820244" cluster
	I0510 17:52:08.526333  395992 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0510 17:52:08.550551  395992 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0510 17:52:08.550590  395992 cache.go:56] Caching tarball of preloaded images
	I0510 17:52:08.550775  395992 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0510 17:52:08.552624  395992 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0510 17:52:08.552652  395992 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0510 17:52:08.582705  395992 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20720-388787/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-820244 host does not exist
	  To start a cluster, run: "minikube start -p download-only-820244"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-820244
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.33.0/json-events (4.02s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.33.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-783034 --force --alsologtostderr --kubernetes-version=v1.33.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-783034 --force --alsologtostderr --kubernetes-version=v1.33.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.02295s)
--- PASS: TestDownloadOnly/v1.33.0/json-events (4.02s)

                                                
                                    
x
+
TestDownloadOnly/v1.33.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.33.0/preload-exists
I0510 17:52:18.297139  395980 preload.go:131] Checking if preload exists for k8s version v1.33.0 and runtime crio
I0510 17:52:18.297187  395980 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20720-388787/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.33.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.33.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.33.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-783034
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-783034: exit status 85 (64.767104ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-820244 | jenkins | v1.35.0 | 10 May 25 17:52 UTC |                     |
	|         | -p download-only-820244        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	| delete  | -p download-only-820244        | download-only-820244 | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	| start   | -o=json --download-only        | download-only-783034 | jenkins | v1.35.0 | 10 May 25 17:52 UTC |                     |
	|         | -p download-only-783034        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/05/10 17:52:14
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0510 17:52:14.318404  396192 out.go:345] Setting OutFile to fd 1 ...
	I0510 17:52:14.318644  396192 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:52:14.318652  396192 out.go:358] Setting ErrFile to fd 2...
	I0510 17:52:14.318657  396192 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:52:14.318829  396192 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-388787/.minikube/bin
	I0510 17:52:14.319462  396192 out.go:352] Setting JSON to true
	I0510 17:52:14.320375  396192 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":27282,"bootTime":1746872252,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0510 17:52:14.320486  396192 start.go:140] virtualization: kvm guest
	I0510 17:52:14.323078  396192 out.go:97] [download-only-783034] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0510 17:52:14.323278  396192 notify.go:220] Checking for updates...
	I0510 17:52:14.324536  396192 out.go:169] MINIKUBE_LOCATION=20720
	I0510 17:52:14.325902  396192 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0510 17:52:14.327150  396192 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20720-388787/kubeconfig
	I0510 17:52:14.328424  396192 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-388787/.minikube
	I0510 17:52:14.329589  396192 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-783034 host does not exist
	  To start a cluster, run: "minikube start -p download-only-783034"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.33.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.33.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.33.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.33.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.33.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.33.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-783034
--- PASS: TestDownloadOnly/v1.33.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.63s)

                                                
                                                
=== RUN   TestBinaryMirror
I0510 17:52:18.912369  395980 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.33.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.33.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-848338 --alsologtostderr --binary-mirror http://127.0.0.1:38303 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-848338" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-848338
--- PASS: TestBinaryMirror (0.63s)
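
For reference, the binary-mirror scenario above can be reproduced by hand. A minimal sketch, assuming an HTTP server is already serving the Kubernetes binaries (the 127.0.0.1:38303 address is simply the one the test harness happened to use, and binary-mirror-demo is a placeholder profile name):

  # fetch kubectl/kubelet/kubeadm from the mirror instead of dl.k8s.io
  out/minikube-linux-amd64 start --download-only -p binary-mirror-demo \
    --binary-mirror http://127.0.0.1:38303 \
    --driver=kvm2 --container-runtime=crio
  # remove the throwaway profile afterwards
  out/minikube-linux-amd64 delete -p binary-mirror-demo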

                                                
                                    
TestOffline (95.38s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-031624 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-031624 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m34.522436131s)
helpers_test.go:175: Cleaning up "offline-crio-031624" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-031624
--- PASS: TestOffline (95.38s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-573653
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-573653: exit status 85 (53.843391ms)

                                                
                                                
-- stdout --
	* Profile "addons-573653" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-573653"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-573653
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-573653: exit status 85 (52.64681ms)

                                                
                                                
-- stdout --
	* Profile "addons-573653" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-573653"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (137.52s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-573653 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-573653 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m17.520621322s)
--- PASS: TestAddons/Setup (137.52s)
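
The setup invocation above is a single long command; broken onto multiple lines for readability it is the same call, with no flags added or removed:

  out/minikube-linux-amd64 start -p addons-573653 --wait=true --memory=4000 --alsologtostderr \
    --driver=kvm2 --container-runtime=crio \
    --addons=registry --addons=metrics-server --addons=volumesnapshots \
    --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner \
    --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd \
    --addons=volcano --addons=amd-gpu-device-plugin --addons=ingress \
    --addons=ingress-dns --addons=storage-provisioner-rancher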

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-573653 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-573653 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.55s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-573653 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-573653 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d50a2768-dbe5-442b-b3a0-5dc397a99a69] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d50a2768-dbe5-442b-b3a0-5dc397a99a69] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.007282819s
addons_test.go:633: (dbg) Run:  kubectl --context addons-573653 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-573653 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-573653 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.55s)

                                                
                                    
TestAddons/parallel/Registry (19.42s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 2.83279ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-694bd45846-w5zm2" [2792955b-c0bc-4f02-93dd-2d7bb14fb09b] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.439099496s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-wkzrw" [8d763450-a4fa-4fe8-8481-43644617e2bb] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003625528s
addons_test.go:331: (dbg) Run:  kubectl --context addons-573653 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-573653 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-573653 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.691102465s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-573653 ip
2025/05/10 17:55:14 [DEBUG] GET http://192.168.39.219:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-573653 addons disable registry --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-573653 addons disable registry --alsologtostderr -v=1: (1.094228843s)
--- PASS: TestAddons/parallel/Registry (19.42s)
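
The registry check above can be repeated manually once the addon is enabled; a minimal sketch using the same busybox image and service name the test uses:

  # probe the registry service from inside the cluster
  kubectl --context addons-573653 run --rm registry-test --restart=Never \
    --image=gcr.io/k8s-minikube/busybox -it -- \
    sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
  # the registry is also reachable on the node IP at port 5000
  out/minikube-linux-amd64 -p addons-573653 ip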

                                                
                                    
TestAddons/parallel/InspektorGadget (10.91s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-rrw88" [9978d650-c75b-45a5-8680-4b523fa6dd46] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003728816s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-573653 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-573653 addons disable inspektor-gadget --alsologtostderr -v=1: (5.906445983s)
--- PASS: TestAddons/parallel/InspektorGadget (10.91s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.8s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 2.937282ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-4svvf" [e6310a8d-fbdc-463e-af85-5ad9c1e1cf86] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.439992502s
addons_test.go:402: (dbg) Run:  kubectl --context addons-573653 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-573653 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-573653 addons disable metrics-server --alsologtostderr -v=1: (1.277662718s)
--- PASS: TestAddons/parallel/MetricsServer (6.80s)

                                                
                                    
TestAddons/parallel/CSI (47.36s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0510 17:55:02.101022  395980 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0510 17:55:02.109937  395980 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0510 17:55:02.109978  395980 kapi.go:107] duration metric: took 8.975486ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 8.990341ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-573653 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-573653 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-573653 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-573653 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-573653 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-573653 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-573653 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-573653 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [aba71329-d02a-449d-9dec-a7c9c956538e] Pending
helpers_test.go:344: "task-pv-pod" [aba71329-d02a-449d-9dec-a7c9c956538e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [aba71329-d02a-449d-9dec-a7c9c956538e] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.007443409s
addons_test.go:511: (dbg) Run:  kubectl --context addons-573653 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-573653 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-573653 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-573653 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-573653 delete pod task-pv-pod: (1.255808383s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-573653 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-573653 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-573653 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-573653 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-573653 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-573653 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-573653 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-573653 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-573653 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-573653 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-573653 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-573653 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-573653 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-573653 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [71f60ca1-51bb-461e-89b3-a6bfdf0b5af0] Pending
helpers_test.go:344: "task-pv-pod-restore" [71f60ca1-51bb-461e-89b3-a6bfdf0b5af0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [71f60ca1-51bb-461e-89b3-a6bfdf0b5af0] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.005014691s
addons_test.go:553: (dbg) Run:  kubectl --context addons-573653 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-573653 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-573653 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-573653 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-573653 addons disable volumesnapshots --alsologtostderr -v=1: (1.045304431s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-573653 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-573653 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.047714448s)
--- PASS: TestAddons/parallel/CSI (47.36s)
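
Condensed, the CSI exercise above is a claim/pod/snapshot/restore round trip driven by the manifests under the test's testdata/csi-hostpath-driver directory; a sketch of the same sequence, assuming those manifests are checked out locally:

  kubectl --context addons-573653 create -f testdata/csi-hostpath-driver/pvc.yaml
  kubectl --context addons-573653 create -f testdata/csi-hostpath-driver/pv-pod.yaml
  kubectl --context addons-573653 create -f testdata/csi-hostpath-driver/snapshot.yaml
  kubectl --context addons-573653 delete pod task-pv-pod
  kubectl --context addons-573653 delete pvc hpvc
  kubectl --context addons-573653 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
  kubectl --context addons-573653 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
  kubectl --context addons-573653 delete pod task-pv-pod-restore
  kubectl --context addons-573653 delete pvc hpvc-restore
  kubectl --context addons-573653 delete volumesnapshot new-snapshot-demo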

                                                
                                    
TestAddons/parallel/Headlamp (18.84s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-573653 --alsologtostderr -v=1
addons_test.go:747: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-573653 --alsologtostderr -v=1: (1.040725828s)
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-qqhqw" [1a3f8d45-75c3-48be-843b-ba204161d2c5] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-qqhqw" [1a3f8d45-75c3-48be-843b-ba204161d2c5] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004040472s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-573653 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-573653 addons disable headlamp --alsologtostderr -v=1: (6.798693202s)
--- PASS: TestAddons/parallel/Headlamp (18.84s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.81s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-b85f6bbb8-fv6s7" [f5245c39-3297-4694-a82f-d71200deb856] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.008930062s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-573653 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.81s)

                                                
                                    
TestAddons/parallel/LocalPath (57.05s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-573653 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-573653 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-573653 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-573653 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-573653 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-573653 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-573653 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-573653 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-573653 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-573653 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-573653 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-573653 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [4afc141c-f1cd-4d2d-be9c-04686d3ae1f6] Pending
helpers_test.go:344: "test-local-path" [4afc141c-f1cd-4d2d-be9c-04686d3ae1f6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [4afc141c-f1cd-4d2d-be9c-04686d3ae1f6] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [4afc141c-f1cd-4d2d-be9c-04686d3ae1f6] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.005902538s
addons_test.go:906: (dbg) Run:  kubectl --context addons-573653 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-573653 ssh "cat /opt/local-path-provisioner/pvc-615a4158-a857-4e10-b582-9688b4023855_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-573653 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-573653 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-573653 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-573653 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.040816711s)
--- PASS: TestAddons/parallel/LocalPath (57.05s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.3s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-2qlw4" [3edbc82d-3c0a-4ac4-b43c-ac2363e24f12] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.443401871s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-573653 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.30s)

                                                
                                    
TestAddons/parallel/Yakd (12.19s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-5bk6j" [2bd34928-f091-430b-bf4d-4191f8e45068] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.006823154s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-573653 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-573653 addons disable yakd --alsologtostderr -v=1: (6.178450663s)
--- PASS: TestAddons/parallel/Yakd (12.19s)

                                                
                                    
TestAddons/StoppedEnableDisable (91.33s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-573653
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-573653: (1m31.012778992s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-573653
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-573653
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-573653
--- PASS: TestAddons/StoppedEnableDisable (91.33s)
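
What this test confirms is that addon enable/disable keeps working against a stopped cluster; a minimal sketch of the same check:

  out/minikube-linux-amd64 stop -p addons-573653
  out/minikube-linux-amd64 addons enable dashboard -p addons-573653
  out/minikube-linux-amd64 addons disable dashboard -p addons-573653
  out/minikube-linux-amd64 addons disable gvisor -p addons-573653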

                                                
                                    
TestCertOptions (71.41s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-178760 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-178760 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m9.85495219s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-178760 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-178760 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-178760 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-178760" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-178760
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-178760: (1.062616571s)
--- PASS: TestCertOptions (71.41s)
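
The certificate-options run above can be reproduced and verified by hand; a minimal sketch with the same flags (cert-options-demo is a placeholder profile name):

  out/minikube-linux-amd64 start -p cert-options-demo --memory=2048 \
    --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
    --apiserver-names=localhost --apiserver-names=www.google.com \
    --apiserver-port=8555 --driver=kvm2 --container-runtime=crio
  # confirm the extra IPs and names made it into the apiserver certificate
  out/minikube-linux-amd64 -p cert-options-demo ssh \
    "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
  # confirm the kubeconfig points at the non-default port 8555
  kubectl --context cert-options-demo config view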

                                                
                                    
TestCertExpiration (262.46s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-355262 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-355262 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (54.950686371s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-355262 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-355262 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (26.370565413s)
helpers_test.go:175: Cleaning up "cert-expiration-355262" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-355262
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-355262: (1.139694995s)
--- PASS: TestCertExpiration (262.46s)
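
The expiration scenario is a two-step start: provision with a 3-minute certificate lifetime, let the certificates lapse, then start again with a longer lifetime so they are regenerated. A minimal sketch of the same sequence (cert-expiration-demo is a placeholder profile name):

  out/minikube-linux-amd64 start -p cert-expiration-demo --memory=2048 \
    --cert-expiration=3m --driver=kvm2 --container-runtime=crio
  sleep 180   # wait out the 3-minute lifetime, as the test does
  out/minikube-linux-amd64 start -p cert-expiration-demo --memory=2048 \
    --cert-expiration=8760h --driver=kvm2 --container-runtime=crio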

                                                
                                    
TestForceSystemdFlag (68.89s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-525854 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-525854 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m7.603058123s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-525854 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-525854" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-525854
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-525854: (1.058278638s)
--- PASS: TestForceSystemdFlag (68.89s)
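
The --force-systemd check above boils down to starting with the flag and then reading CRI-O's drop-in config inside the VM; a minimal sketch (force-systemd-demo is a placeholder profile name):

  out/minikube-linux-amd64 start -p force-systemd-demo --memory=2048 \
    --force-systemd --driver=kvm2 --container-runtime=crio
  # inspect the cgroup manager CRI-O was configured with
  out/minikube-linux-amd64 -p force-systemd-demo ssh \
    "cat /etc/crio/crio.conf.d/02-crio.conf"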

                                                
                                    
TestForceSystemdEnv (78.26s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-429136 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0510 19:14:37.810237  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-429136 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m17.220209417s)
helpers_test.go:175: Cleaning up "force-systemd-env-429136" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-429136
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-429136: (1.035106455s)
--- PASS: TestForceSystemdEnv (78.26s)

                                                
                                    
TestKVMDriverInstallOrUpdate (1.29s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0510 19:14:01.212103  395980 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0510 19:14:01.212273  395980 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0510 19:14:01.245772  395980 install.go:62] docker-machine-driver-kvm2: exit status 1
W0510 19:14:01.246035  395980 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0510 19:14:01.246100  395980 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2503361469/001/docker-machine-driver-kvm2
I0510 19:14:01.371867  395980 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2503361469/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x557a960 0x557a960 0x557a960 0x557a960 0x557a960 0x557a960 0x557a960] Decompressors:map[bz2:0xc000013580 gz:0xc000013588 tar:0xc000013510 tar.bz2:0xc000013530 tar.gz:0xc000013540 tar.xz:0xc000013560 tar.zst:0xc000013570 tbz2:0xc000013530 tgz:0xc000013540 txz:0xc000013560 tzst:0xc000013570 xz:0xc000013590 zip:0xc0000135a0 zst:0xc000013598] Getters:map[file:0xc0008af2a0 http:0xc0005fa3c0 https:0xc0005fa410] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0510 19:14:01.371915  395980 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2503361469/001/docker-machine-driver-kvm2
I0510 19:14:01.988458  395980 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0510 19:14:01.988552  395980 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0510 19:14:02.022965  395980 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0510 19:14:02.023001  395980 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0510 19:14:02.023076  395980 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0510 19:14:02.023110  395980 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2503361469/002/docker-machine-driver-kvm2
I0510 19:14:02.047227  395980 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2503361469/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x557a960 0x557a960 0x557a960 0x557a960 0x557a960 0x557a960 0x557a960] Decompressors:map[bz2:0xc000013580 gz:0xc000013588 tar:0xc000013510 tar.bz2:0xc000013530 tar.gz:0xc000013540 tar.xz:0xc000013560 tar.zst:0xc000013570 tbz2:0xc000013530 tgz:0xc000013540 txz:0xc000013560 tzst:0xc000013570 xz:0xc000013590 zip:0xc0000135a0 zst:0xc000013598] Getters:map[file:0xc00099d840 http:0xc0006f8050 https:0xc0006f80a0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0510 19:14:02.047318  395980 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2503361469/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (1.29s)
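
The 404s in this log are expected: the driver fetch first tries the arch-suffixed release asset and, when its checksum file is missing, falls back to the unsuffixed name. A rough sketch of that fallback with curl, assuming the same v1.3.0 release the test targets and skipping the checksum verification the real downloader performs:

  BASE=https://github.com/kubernetes/minikube/releases/download/v1.3.0
  curl -fLo docker-machine-driver-kvm2 "$BASE/docker-machine-driver-kvm2-amd64" \
    || curl -fLo docker-machine-driver-kvm2 "$BASE/docker-machine-driver-kvm2"
  chmod +x docker-machine-driver-kvm2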

                                                
                                    
TestErrorSpam/setup (50.94s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-878760 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-878760 --driver=kvm2  --container-runtime=crio
E0510 17:59:37.818508  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:59:37.824993  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:59:37.836470  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:59:37.858001  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:59:37.899487  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:59:37.981066  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:59:38.142718  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:59:38.464465  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:59:39.106698  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:59:40.388527  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:59:42.951637  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:59:48.073894  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:59:58.316182  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:00:18.797813  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-878760 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-878760 --driver=kvm2  --container-runtime=crio: (50.941504617s)
--- PASS: TestErrorSpam/setup (50.94s)

                                                
                                    
TestErrorSpam/start (0.38s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-878760 --log_dir /tmp/nospam-878760 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-878760 --log_dir /tmp/nospam-878760 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-878760 --log_dir /tmp/nospam-878760 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

                                                
                                    
TestErrorSpam/status (0.85s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-878760 --log_dir /tmp/nospam-878760 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-878760 --log_dir /tmp/nospam-878760 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-878760 --log_dir /tmp/nospam-878760 status
--- PASS: TestErrorSpam/status (0.85s)

                                                
                                    
TestErrorSpam/pause (1.94s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-878760 --log_dir /tmp/nospam-878760 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-878760 --log_dir /tmp/nospam-878760 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-878760 --log_dir /tmp/nospam-878760 pause
--- PASS: TestErrorSpam/pause (1.94s)

                                                
                                    
TestErrorSpam/unpause (2.02s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-878760 --log_dir /tmp/nospam-878760 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-878760 --log_dir /tmp/nospam-878760 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-878760 --log_dir /tmp/nospam-878760 unpause
--- PASS: TestErrorSpam/unpause (2.02s)

                                                
                                    
TestErrorSpam/stop (4.68s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-878760 --log_dir /tmp/nospam-878760 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-878760 --log_dir /tmp/nospam-878760 stop: (2.372708289s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-878760 --log_dir /tmp/nospam-878760 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-878760 --log_dir /tmp/nospam-878760 stop: (1.17939506s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-878760 --log_dir /tmp/nospam-878760 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-878760 --log_dir /tmp/nospam-878760 stop: (1.125931459s)
--- PASS: TestErrorSpam/stop (4.68s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20720-388787/.minikube/files/etc/test/nested/copy/395980/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (91.26s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 start -p functional-581506 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0510 18:00:59.760045  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2251: (dbg) Done: out/minikube-linux-amd64 start -p functional-581506 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m31.263336842s)
--- PASS: TestFunctional/serial/StartWithProxy (91.26s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (37.37s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0510 18:02:01.652293  395980 config.go:182] Loaded profile config "functional-581506": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
functional_test.go:676: (dbg) Run:  out/minikube-linux-amd64 start -p functional-581506 --alsologtostderr -v=8
E0510 18:02:21.683066  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:676: (dbg) Done: out/minikube-linux-amd64 start -p functional-581506 --alsologtostderr -v=8: (37.366903376s)
functional_test.go:680: soft start took 37.367728262s for "functional-581506" cluster.
I0510 18:02:39.019618  395980 config.go:182] Loaded profile config "functional-581506": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
--- PASS: TestFunctional/serial/SoftStart (37.37s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-581506 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.12s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.49s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-581506 cache add registry.k8s.io/pause:3.1: (1.117107215s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-581506 cache add registry.k8s.io/pause:3.3: (1.222475671s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 cache add registry.k8s.io/pause:latest
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-581506 cache add registry.k8s.io/pause:latest: (1.145664593s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.49s)
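
For reference, cache add pulls an image on the host and stores it under the minikube cache directory (by default ~/.minikube/cache; $MINIKUBE_HOME/... in this run) so nodes can load it without pulling again. Roughly the same flow as above:

    minikube -p functional-581506 cache add registry.k8s.io/pause:3.1   # pull on the host and cache it
    minikube cache list                                                  # cache list takes no profile flag in this run
    minikube cache delete registry.k8s.io/pause:3.1                     # drop it from the host-side cache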

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.19s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-581506 /tmp/TestFunctionalserialCacheCmdcacheadd_local929093729/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 cache add minikube-local-cache-test:functional-581506
functional_test.go:1111: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 cache delete minikube-local-cache-test:functional-581506
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-581506
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.19s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.81s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-581506 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (231.910265ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 cache reload
functional_test.go:1175: (dbg) Done: out/minikube-linux-amd64 -p functional-581506 cache reload: (1.040233646s)
functional_test.go:1180: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.81s)
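
The reload step above restores into the node any image that was removed from its runtime but is still present in the host-side cache; the crictl inspecti failure (exit 1, FATA "no such image") is the expected intermediate state. The sequence being verified, roughly:

    minikube -p functional-581506 ssh sudo crictl rmi registry.k8s.io/pause:latest       # remove the image inside the node
    minikube -p functional-581506 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # fails: image is gone
    minikube -p functional-581506 cache reload                                           # push cached images back into the node
    minikube -p functional-581506 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds again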

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 kubectl -- --context functional-581506 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-581506 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.5s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 logs
functional_test.go:1253: (dbg) Done: out/minikube-linux-amd64 -p functional-581506 logs: (1.502212752s)
--- PASS: TestFunctional/serial/LogsCmd (1.50s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.51s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 logs --file /tmp/TestFunctionalserialLogsFileCmd1364992162/001/logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-linux-amd64 -p functional-581506 logs --file /tmp/TestFunctionalserialLogsFileCmd1364992162/001/logs.txt: (1.506008037s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.51s)

                                                
                                    
TestFunctional/serial/InvalidService (3.62s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-581506 apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-581506
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-581506: exit status 115 (293.456243ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.52:30933 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-581506 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.62s)
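
For context, minikube service resolves the NodePort URL (the table in stdout above) but then refuses to open it because the Service has no running endpoints, exiting with SVC_UNREACHABLE (exit status 115 here). A rough sketch of the scenario:

    kubectl --context functional-581506 apply -f testdata/invalidsvc.yaml   # service with no running backing pod
    minikube service invalid-svc -p functional-581506                       # exits 115: "no running pod for service invalid-svc found"
    kubectl --context functional-581506 delete -f testdata/invalidsvc.yaml  # clean up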

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-581506 config get cpus: exit status 14 (54.494457ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-581506 config get cpus: exit status 14 (66.468872ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.37s)
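
The exit status 14 above is what config get returns when the key is not set; set and unset themselves succeed silently. The cycle being exercised, roughly:

    minikube -p functional-581506 config get cpus      # exits 14: "specified key could not be found in config"
    minikube -p functional-581506 config set cpus 2    # stores the value in minikube's config
    minikube -p functional-581506 config get cpus      # now succeeds and prints the stored value
    minikube -p functional-581506 config unset cpus    # back to unset; get exits 14 again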

                                                
                                    
TestFunctional/parallel/DryRun (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-581506 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-581506 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (144.307184ms)

                                                
                                                
-- stdout --
	* [functional-581506] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20720
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20720-388787/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-388787/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0510 18:16:05.957906  407146 out.go:345] Setting OutFile to fd 1 ...
	I0510 18:16:05.958551  407146 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 18:16:05.958572  407146 out.go:358] Setting ErrFile to fd 2...
	I0510 18:16:05.958577  407146 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 18:16:05.958819  407146 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-388787/.minikube/bin
	I0510 18:16:05.959428  407146 out.go:352] Setting JSON to false
	I0510 18:16:05.960473  407146 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":28714,"bootTime":1746872252,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0510 18:16:05.960590  407146 start.go:140] virtualization: kvm guest
	I0510 18:16:05.963000  407146 out.go:177] * [functional-581506] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0510 18:16:05.964177  407146 out.go:177]   - MINIKUBE_LOCATION=20720
	I0510 18:16:05.964173  407146 notify.go:220] Checking for updates...
	I0510 18:16:05.966286  407146 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0510 18:16:05.967405  407146 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20720-388787/kubeconfig
	I0510 18:16:05.968648  407146 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-388787/.minikube
	I0510 18:16:05.970085  407146 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0510 18:16:05.971386  407146 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0510 18:16:05.973051  407146 config.go:182] Loaded profile config "functional-581506": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 18:16:05.973463  407146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 18:16:05.973531  407146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:16:05.991827  407146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45447
	I0510 18:16:05.992258  407146 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:16:05.992743  407146 main.go:141] libmachine: Using API Version  1
	I0510 18:16:05.992783  407146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:16:05.993227  407146 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:16:05.993401  407146 main.go:141] libmachine: (functional-581506) Calling .DriverName
	I0510 18:16:05.993712  407146 driver.go:404] Setting default libvirt URI to qemu:///system
	I0510 18:16:05.994062  407146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 18:16:05.994105  407146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:16:06.009838  407146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33125
	I0510 18:16:06.010309  407146 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:16:06.010769  407146 main.go:141] libmachine: Using API Version  1
	I0510 18:16:06.010793  407146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:16:06.011151  407146 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:16:06.011426  407146 main.go:141] libmachine: (functional-581506) Calling .DriverName
	I0510 18:16:06.047938  407146 out.go:177] * Using the kvm2 driver based on existing profile
	I0510 18:16:06.049288  407146 start.go:304] selected driver: kvm2
	I0510 18:16:06.049315  407146 start.go:908] validating driver "kvm2" against &{Name:functional-581506 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.33.0 ClusterName:functional-581506 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.52 Port:8441 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 18:16:06.049443  407146 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0510 18:16:06.051494  407146 out.go:201] 
	W0510 18:16:06.052765  407146 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0510 18:16:06.054092  407146 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-amd64 start -p functional-581506 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.29s)
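
Exit status 23 corresponds to RSRC_INSUFFICIENT_REQ_MEMORY: even with --dry-run, start validates the requested memory against the usable minimum (1800MB in this message) before touching the existing profile. The two cases exercised, roughly:

    # rejected during validation; --dry-run means no VM changes either way
    minikube start -p functional-581506 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio
    # same dry run without overriding memory reuses the profile's settings and passes validation
    minikube start -p functional-581506 --dry-run --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio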

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 start -p functional-581506 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-581506 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (140.424696ms)

                                                
                                                
-- stdout --
	* [functional-581506] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20720
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20720-388787/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-388787/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0510 18:16:06.249127  407218 out.go:345] Setting OutFile to fd 1 ...
	I0510 18:16:06.249230  407218 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 18:16:06.249245  407218 out.go:358] Setting ErrFile to fd 2...
	I0510 18:16:06.249249  407218 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 18:16:06.249538  407218 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-388787/.minikube/bin
	I0510 18:16:06.250056  407218 out.go:352] Setting JSON to false
	I0510 18:16:06.250986  407218 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":28714,"bootTime":1746872252,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0510 18:16:06.251048  407218 start.go:140] virtualization: kvm guest
	I0510 18:16:06.252905  407218 out.go:177] * [functional-581506] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0510 18:16:06.254379  407218 out.go:177]   - MINIKUBE_LOCATION=20720
	I0510 18:16:06.254378  407218 notify.go:220] Checking for updates...
	I0510 18:16:06.255877  407218 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0510 18:16:06.257250  407218 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20720-388787/kubeconfig
	I0510 18:16:06.258440  407218 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-388787/.minikube
	I0510 18:16:06.259843  407218 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0510 18:16:06.261024  407218 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0510 18:16:06.262455  407218 config.go:182] Loaded profile config "functional-581506": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 18:16:06.262923  407218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 18:16:06.263004  407218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:16:06.279063  407218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38601
	I0510 18:16:06.279680  407218 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:16:06.280465  407218 main.go:141] libmachine: Using API Version  1
	I0510 18:16:06.280504  407218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:16:06.280895  407218 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:16:06.281110  407218 main.go:141] libmachine: (functional-581506) Calling .DriverName
	I0510 18:16:06.281407  407218 driver.go:404] Setting default libvirt URI to qemu:///system
	I0510 18:16:06.281717  407218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 18:16:06.281756  407218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:16:06.297201  407218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44999
	I0510 18:16:06.297734  407218 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:16:06.298367  407218 main.go:141] libmachine: Using API Version  1
	I0510 18:16:06.298396  407218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:16:06.298758  407218 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:16:06.298967  407218 main.go:141] libmachine: (functional-581506) Calling .DriverName
	I0510 18:16:06.333465  407218 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0510 18:16:06.334614  407218 start.go:304] selected driver: kvm2
	I0510 18:16:06.334628  407218 start.go:908] validating driver "kvm2" against &{Name:functional-581506 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.33.0 ClusterName:functional-581506 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.52 Port:8441 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 18:16:06.334724  407218 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0510 18:16:06.336620  407218 out.go:201] 
	W0510 18:16:06.337727  407218 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0510 18:16:06.338871  407218 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)
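
The French output above is the same dry-run failure as in DryRun, localized. The test drives this through the locale environment; assuming LC_ALL=fr is the variable used (an assumption here, not visible in this log), the equivalent invocation is roughly:

    # with a French locale, the RSRC_INSUFFICIENT_REQ_MEMORY message is emitted in French
    LC_ALL=fr minikube start -p functional-581506 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio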

                                                
                                    
TestFunctional/parallel/StatusCmd (0.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.84s)
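
status supports a Go template via -f/--format and JSON via -o json; the three invocations above cover the default text output, a custom template, and JSON. For reference (template fields as in the run above):

    minikube -p functional-581506 status                                                    # default text output
    minikube -p functional-581506 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    minikube -p functional-581506 status -o json                                            # machine-readable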

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.41s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 ssh -n functional-581506 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 cp functional-581506:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3586486842/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 ssh -n functional-581506 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 ssh -n functional-581506 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.55s)
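
minikube cp copies files between the host and a node in either direction; node-side paths are written as <node>:<path>, and a bare in-VM path targets the primary node. The three transfers checked above, roughly:

    minikube -p functional-581506 cp testdata/cp-test.txt /home/docker/cp-test.txt                # host -> node
    minikube -p functional-581506 cp functional-581506:/home/docker/cp-test.txt /tmp/cp-test.txt  # node -> host
    minikube -p functional-581506 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt         # the test then cats this path inside the node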

                                                
                                    
TestFunctional/parallel/FileSync (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/395980/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 ssh "sudo cat /etc/test/nested/copy/395980/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)
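
The file checked above is provisioned by minikube's file sync: files placed under $MINIKUBE_HOME/files/<path> on the host are copied to /<path> inside the node during start. Assuming that mechanism (the 395980 segment is just this run's PID-derived name), a sketch:

    mkdir -p ~/.minikube/files/etc/test/nested/copy/395980
    echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/395980/hosts
    minikube start -p functional-581506                                       # sync happens during provisioning
    minikube -p functional-581506 ssh "sudo cat /etc/test/nested/copy/395980/hosts"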

                                                
                                    
TestFunctional/parallel/CertSync (1.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/395980.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 ssh "sudo cat /etc/ssl/certs/395980.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/395980.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 ssh "sudo cat /usr/share/ca-certificates/395980.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3959802.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 ssh "sudo cat /etc/ssl/certs/3959802.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/3959802.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 ssh "sudo cat /usr/share/ca-certificates/3959802.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.56s)
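
Cert sync works along the same lines: certificates dropped into $MINIKUBE_HOME/certs on the host end up inside the node under /etc/ssl/certs and /usr/share/ca-certificates, with an OpenSSL hash-named entry such as 51391683.0, which is what the six checks above look for. A sketch under those assumptions:

    cp 395980.pem ~/.minikube/certs/                                          # host-side CA cert, named after the test PID
    minikube start -p functional-581506                                       # certs are installed during provisioning
    minikube -p functional-581506 ssh "sudo cat /etc/ssl/certs/395980.pem"
    minikube -p functional-581506 ssh "sudo cat /etc/ssl/certs/51391683.0"    # hash-named copy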

                                                
                                    
TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-581506 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 ssh "sudo systemctl is-active docker"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-581506 ssh "sudo systemctl is-active docker": exit status 1 (248.832346ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 ssh "sudo systemctl is-active containerd"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-581506 ssh "sudo systemctl is-active containerd": exit status 1 (273.241343ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)
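
The non-zero exits here are the expected result: with crio as the active runtime, docker and containerd are inactive, so systemctl is-active prints "inactive" and exits non-zero (status 3 on the remote side, surfaced by minikube ssh as exit 1). For reference:

    minikube -p functional-581506 ssh "sudo systemctl is-active crio"        # active runtime, exit 0
    minikube -p functional-581506 ssh "sudo systemctl is-active docker"      # inactive, non-zero exit
    minikube -p functional-581506 ssh "sudo systemctl is-active containerd"  # inactive, non-zero exit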

                                                
                                    
TestFunctional/parallel/License (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 image ls --format short --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-581506 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.33.0
registry.k8s.io/kube-proxy:v1.33.0
registry.k8s.io/kube-controller-manager:v1.33.0
registry.k8s.io/kube-apiserver:v1.33.0
registry.k8s.io/etcd:3.5.21-0
registry.k8s.io/coredns/coredns:v1.12.0
localhost/minikube-local-cache-test:functional-581506
localhost/kicbase/echo-server:functional-581506
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/kindest/kindnetd:v20250214-acbabc1a
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-581506 image ls --format short --alsologtostderr:
I0510 18:18:52.180312  408654 out.go:345] Setting OutFile to fd 1 ...
I0510 18:18:52.180603  408654 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0510 18:18:52.180617  408654 out.go:358] Setting ErrFile to fd 2...
I0510 18:18:52.180624  408654 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0510 18:18:52.180888  408654 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-388787/.minikube/bin
I0510 18:18:52.181609  408654 config.go:182] Loaded profile config "functional-581506": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
I0510 18:18:52.181791  408654 config.go:182] Loaded profile config "functional-581506": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
I0510 18:18:52.182300  408654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0510 18:18:52.182380  408654 main.go:141] libmachine: Launching plugin server for driver kvm2
I0510 18:18:52.198164  408654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34855
I0510 18:18:52.198709  408654 main.go:141] libmachine: () Calling .GetVersion
I0510 18:18:52.199303  408654 main.go:141] libmachine: Using API Version  1
I0510 18:18:52.199330  408654 main.go:141] libmachine: () Calling .SetConfigRaw
I0510 18:18:52.199668  408654 main.go:141] libmachine: () Calling .GetMachineName
I0510 18:18:52.199896  408654 main.go:141] libmachine: (functional-581506) Calling .GetState
I0510 18:18:52.202202  408654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0510 18:18:52.202260  408654 main.go:141] libmachine: Launching plugin server for driver kvm2
I0510 18:18:52.219901  408654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34795
I0510 18:18:52.220398  408654 main.go:141] libmachine: () Calling .GetVersion
I0510 18:18:52.220965  408654 main.go:141] libmachine: Using API Version  1
I0510 18:18:52.220997  408654 main.go:141] libmachine: () Calling .SetConfigRaw
I0510 18:18:52.221438  408654 main.go:141] libmachine: () Calling .GetMachineName
I0510 18:18:52.221672  408654 main.go:141] libmachine: (functional-581506) Calling .DriverName
I0510 18:18:52.221956  408654 ssh_runner.go:195] Run: systemctl --version
I0510 18:18:52.221992  408654 main.go:141] libmachine: (functional-581506) Calling .GetSSHHostname
I0510 18:18:52.225537  408654 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined MAC address 52:54:00:34:2c:dc in network mk-functional-581506
I0510 18:18:52.226028  408654 main.go:141] libmachine: (functional-581506) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:2c:dc", ip: ""} in network mk-functional-581506: {Iface:virbr1 ExpiryTime:2025-05-10 19:00:46 +0000 UTC Type:0 Mac:52:54:00:34:2c:dc Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-581506 Clientid:01:52:54:00:34:2c:dc}
I0510 18:18:52.226060  408654 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined IP address 192.168.39.52 and MAC address 52:54:00:34:2c:dc in network mk-functional-581506
I0510 18:18:52.226208  408654 main.go:141] libmachine: (functional-581506) Calling .GetSSHPort
I0510 18:18:52.226399  408654 main.go:141] libmachine: (functional-581506) Calling .GetSSHKeyPath
I0510 18:18:52.226537  408654 main.go:141] libmachine: (functional-581506) Calling .GetSSHUsername
I0510 18:18:52.226659  408654 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/functional-581506/id_rsa Username:docker}
I0510 18:18:52.316410  408654 ssh_runner.go:195] Run: sudo crictl images --output json
I0510 18:18:52.380872  408654 main.go:141] libmachine: Making call to close driver server
I0510 18:18:52.380889  408654 main.go:141] libmachine: (functional-581506) Calling .Close
I0510 18:18:52.381218  408654 main.go:141] libmachine: Successfully made call to close driver server
I0510 18:18:52.381241  408654 main.go:141] libmachine: Making call to close connection to plugin binary
I0510 18:18:52.381252  408654 main.go:141] libmachine: Making call to close driver server
I0510 18:18:52.381262  408654 main.go:141] libmachine: (functional-581506) Calling .Close
I0510 18:18:52.381550  408654 main.go:141] libmachine: (functional-581506) DBG | Closing plugin on server side
I0510 18:18:52.381567  408654 main.go:141] libmachine: Successfully made call to close driver server
I0510 18:18:52.381581  408654 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)
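
image ls supports several output formats: short prints one repo:tag per line (as above), while table and json (exercised in the following entries) add image IDs, sizes, and digests. For reference:

    minikube -p functional-581506 image ls --format short   # repo:tag only
    minikube -p functional-581506 image ls --format table   # table with image ID and size
    minikube -p functional-581506 image ls --format json    # full digests, suitable for scripting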

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-581506 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| localhost/kicbase/echo-server           | functional-581506  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/coredns/coredns         | v1.12.0            | 1cf5f116067c6 | 71.2MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/minikube-local-cache-test     | functional-581506  | 0e0d4ec4d5d14 | 3.33kB |
| localhost/my-image                      | functional-581506  | 1be27181eeb41 | 1.47MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/kindest/kindnetd              | v20250214-acbabc1a | df3849d954c98 | 95.7MB |
| registry.k8s.io/etcd                    | 3.5.21-0           | 499038711c081 | 154MB  |
| registry.k8s.io/kube-controller-manager | v1.33.0            | 1d579cb6d6967 | 95.7MB |
| registry.k8s.io/kube-scheduler          | v1.33.0            | 8d72586a76469 | 74.5MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/kube-apiserver          | v1.33.0            | 6ba9545b2183e | 103MB  |
| registry.k8s.io/kube-proxy              | v1.33.0            | f1184a0bd7fe5 | 99.1MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-581506 image ls --format table --alsologtostderr:
I0510 18:18:56.224911  409170 out.go:345] Setting OutFile to fd 1 ...
I0510 18:18:56.225192  409170 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0510 18:18:56.225203  409170 out.go:358] Setting ErrFile to fd 2...
I0510 18:18:56.225206  409170 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0510 18:18:56.225392  409170 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-388787/.minikube/bin
I0510 18:18:56.225950  409170 config.go:182] Loaded profile config "functional-581506": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
I0510 18:18:56.226044  409170 config.go:182] Loaded profile config "functional-581506": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
I0510 18:18:56.226419  409170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0510 18:18:56.226471  409170 main.go:141] libmachine: Launching plugin server for driver kvm2
I0510 18:18:56.243071  409170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40993
I0510 18:18:56.243552  409170 main.go:141] libmachine: () Calling .GetVersion
I0510 18:18:56.245908  409170 main.go:141] libmachine: Using API Version  1
I0510 18:18:56.245982  409170 main.go:141] libmachine: () Calling .SetConfigRaw
I0510 18:18:56.246550  409170 main.go:141] libmachine: () Calling .GetMachineName
I0510 18:18:56.246949  409170 main.go:141] libmachine: (functional-581506) Calling .GetState
I0510 18:18:56.249307  409170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0510 18:18:56.249360  409170 main.go:141] libmachine: Launching plugin server for driver kvm2
I0510 18:18:56.266049  409170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35513
I0510 18:18:56.266626  409170 main.go:141] libmachine: () Calling .GetVersion
I0510 18:18:56.267353  409170 main.go:141] libmachine: Using API Version  1
I0510 18:18:56.267381  409170 main.go:141] libmachine: () Calling .SetConfigRaw
I0510 18:18:56.267783  409170 main.go:141] libmachine: () Calling .GetMachineName
I0510 18:18:56.268041  409170 main.go:141] libmachine: (functional-581506) Calling .DriverName
I0510 18:18:56.268279  409170 ssh_runner.go:195] Run: systemctl --version
I0510 18:18:56.268315  409170 main.go:141] libmachine: (functional-581506) Calling .GetSSHHostname
I0510 18:18:56.270920  409170 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined MAC address 52:54:00:34:2c:dc in network mk-functional-581506
I0510 18:18:56.271457  409170 main.go:141] libmachine: (functional-581506) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:2c:dc", ip: ""} in network mk-functional-581506: {Iface:virbr1 ExpiryTime:2025-05-10 19:00:46 +0000 UTC Type:0 Mac:52:54:00:34:2c:dc Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-581506 Clientid:01:52:54:00:34:2c:dc}
I0510 18:18:56.271486  409170 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined IP address 192.168.39.52 and MAC address 52:54:00:34:2c:dc in network mk-functional-581506
I0510 18:18:56.271680  409170 main.go:141] libmachine: (functional-581506) Calling .GetSSHPort
I0510 18:18:56.271885  409170 main.go:141] libmachine: (functional-581506) Calling .GetSSHKeyPath
I0510 18:18:56.272095  409170 main.go:141] libmachine: (functional-581506) Calling .GetSSHUsername
I0510 18:18:56.272269  409170 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/functional-581506/id_rsa Username:docker}
I0510 18:18:56.385951  409170 ssh_runner.go:195] Run: sudo crictl images --output json
I0510 18:18:56.448522  409170 main.go:141] libmachine: Making call to close driver server
I0510 18:18:56.448552  409170 main.go:141] libmachine: (functional-581506) Calling .Close
I0510 18:18:56.448903  409170 main.go:141] libmachine: Successfully made call to close driver server
I0510 18:18:56.448927  409170 main.go:141] libmachine: Making call to close connection to plugin binary
I0510 18:18:56.448939  409170 main.go:141] libmachine: Making call to close driver server
I0510 18:18:56.448944  409170 main.go:141] libmachine: (functional-581506) DBG | Closing plugin on server side
I0510 18:18:56.448948  409170 main.go:141] libmachine: (functional-581506) Calling .Close
I0510 18:18:56.449254  409170 main.go:141] libmachine: Successfully made call to close driver server
I0510 18:18:56.449275  409170 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-581506 image ls --format json --alsologtostderr:
[{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"0e0d4ec4d5d14ca69921cfd1b2093d0d8e26cefa42b9a37b63fe0c1391dd3bc2","repoDigests":["localhost/minikube-local-cache-test@sha256:723a2f921ac79321b709c6f8bced4af5feea37a40b1c2497830de10a50fb2c88"],"repoTags":["localhost/minikube-local-cache-test:functional-581506"],"size":"3330"},{"id":"1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b","repoDigests":["registry.k8s.io/coredns/coredns@sha256:2324f485c8db937628a18c293d946327f3a7229b9f77213e8f2256f0b616a4ee","registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97"],"repoTags":["registry.k8s.io/coredns/
coredns:v1.12.0"],"size":"71169915"},{"id":"499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1","repoDigests":["registry.k8s.io/etcd@sha256:21d2177d708b53ac0fbd1c073c334d58f913eb75da293ff086610e61af03630a","registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121"],"repoTags":["registry.k8s.io/etcd:3.5.21-0"],"size":"154190592"},{"id":"1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9de627a31852175b8308cb7c8d92f15365672f6bf26026719cc1c05a03580bc4","registry.k8s.io/kube-controller-manager@sha256:f0b32ab11fd06504608cdb9084f7284106b4f5f07f35eb8823e70ea0eaaf252a"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.33.0"],"size":"95653192"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{
"id":"df3849d954c98a7162c7bee7313ece357606e313d98ebd68b7aac5e961b1156f","repoDigests":["docker.io/kindest/kindnetd@sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495","docker.io/kindest/kindnetd@sha256:f3108bcefe4c9797081f9b4405e510eaec07ff17b8224077b3bad839452ebc97"],"repoTags":["docker.io/kindest/kindnetd:v20250214-acbabc1a"],"size":"95703604"},{"id":"b6613321534d660b014e9996d9e197cb4d152bf271ba45d684d48d781477858c","repoDigests":["docker.io/library/501a7b9d5d3b20ddfdadaabb6821805a264db49dec8a499b393fb3582f33e766-tmp@sha256:c626c86f9b49102678074df4ffaa11d74d81298198ec84ee791b5b66413bf3b1"],"repoTags":[],"size":"1466018"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-581506"],"size":"4943877"},{"id":"6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4","repoDigests":["registry.
k8s.io/kube-apiserver@sha256:6679a9970a8b2f18647b33bf02e5e9895d286689256e2f7172481b4096e46a32","registry.k8s.io/kube-apiserver@sha256:6c0f4ade3e5a34d8791a48671b127a00dc114e84b70ec4d92e586c17d68a1ca6"],"repoTags":["registry.k8s.io/kube-apiserver:v1.33.0"],"size":"102858210"},{"id":"f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68","repoDigests":["registry.k8s.io/kube-proxy@sha256:05f8984642d05b1b1a6c37605a4a566e46e7290f9291d17885f096c36861095b","registry.k8s.io/kube-proxy@sha256:32b893c37d363b18711b397f6ccb29655e3d08183d410f1a93ad298992c9ea7e"],"repoTags":["registry.k8s.io/kube-proxy:v1.33.0"],"size":"99145113"},{"id":"8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4","repoDigests":["registry.k8s.io/kube-scheduler@sha256:8dd2fbeb7f711da53a89ded239e54133f34110d98de887a39a9021e651b51f1f","registry.k8s.io/kube-scheduler@sha256:b375b81c7f253be3f093232650b153288e7f90be3d02a025fd602b4b40fd95c5"],"repoTags":["registry.k8s.io/kube-scheduler:v1.33.0"],"size":"74501448"},{"id":"350b164e7a
e1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","regist
ry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"1be27181eeb41655ff50498e113b43cb443e525b2acc1a174b8902e29215277a","repoDigests":["localhost/my-image@sha256:0a0a65e09914176b1fb4e8db2ebba96f3a7a8bd37b63e0dda3d1fe362ff8b041"],"repoTags":["localhost/my-image:functional-581506"],"size":"1468600"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-581506 image ls --format json --alsologtostderr:
I0510 18:18:56.152434  409150 out.go:345] Setting OutFile to fd 1 ...
I0510 18:18:56.152566  409150 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0510 18:18:56.152578  409150 out.go:358] Setting ErrFile to fd 2...
I0510 18:18:56.152585  409150 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0510 18:18:56.152818  409150 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-388787/.minikube/bin
I0510 18:18:56.153392  409150 config.go:182] Loaded profile config "functional-581506": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
I0510 18:18:56.153517  409150 config.go:182] Loaded profile config "functional-581506": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
I0510 18:18:56.153954  409150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0510 18:18:56.154040  409150 main.go:141] libmachine: Launching plugin server for driver kvm2
I0510 18:18:56.171265  409150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43261
I0510 18:18:56.172808  409150 main.go:141] libmachine: () Calling .GetVersion
I0510 18:18:56.173731  409150 main.go:141] libmachine: Using API Version  1
I0510 18:18:56.173758  409150 main.go:141] libmachine: () Calling .SetConfigRaw
I0510 18:18:56.174139  409150 main.go:141] libmachine: () Calling .GetMachineName
I0510 18:18:56.174343  409150 main.go:141] libmachine: (functional-581506) Calling .GetState
I0510 18:18:56.176435  409150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0510 18:18:56.176481  409150 main.go:141] libmachine: Launching plugin server for driver kvm2
I0510 18:18:56.195760  409150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45655
I0510 18:18:56.196285  409150 main.go:141] libmachine: () Calling .GetVersion
I0510 18:18:56.196879  409150 main.go:141] libmachine: Using API Version  1
I0510 18:18:56.196909  409150 main.go:141] libmachine: () Calling .SetConfigRaw
I0510 18:18:56.198642  409150 main.go:141] libmachine: () Calling .GetMachineName
I0510 18:18:56.198892  409150 main.go:141] libmachine: (functional-581506) Calling .DriverName
I0510 18:18:56.199180  409150 ssh_runner.go:195] Run: systemctl --version
I0510 18:18:56.199209  409150 main.go:141] libmachine: (functional-581506) Calling .GetSSHHostname
I0510 18:18:56.202781  409150 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined MAC address 52:54:00:34:2c:dc in network mk-functional-581506
I0510 18:18:56.203481  409150 main.go:141] libmachine: (functional-581506) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:2c:dc", ip: ""} in network mk-functional-581506: {Iface:virbr1 ExpiryTime:2025-05-10 19:00:46 +0000 UTC Type:0 Mac:52:54:00:34:2c:dc Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-581506 Clientid:01:52:54:00:34:2c:dc}
I0510 18:18:56.203514  409150 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined IP address 192.168.39.52 and MAC address 52:54:00:34:2c:dc in network mk-functional-581506
I0510 18:18:56.203963  409150 main.go:141] libmachine: (functional-581506) Calling .GetSSHPort
I0510 18:18:56.204180  409150 main.go:141] libmachine: (functional-581506) Calling .GetSSHKeyPath
I0510 18:18:56.204383  409150 main.go:141] libmachine: (functional-581506) Calling .GetSSHUsername
I0510 18:18:56.204567  409150 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/functional-581506/id_rsa Username:docker}
I0510 18:18:56.307286  409150 ssh_runner.go:195] Run: sudo crictl images --output json
I0510 18:18:56.371140  409150 main.go:141] libmachine: Making call to close driver server
I0510 18:18:56.371162  409150 main.go:141] libmachine: (functional-581506) Calling .Close
I0510 18:18:56.371509  409150 main.go:141] libmachine: Successfully made call to close driver server
I0510 18:18:56.371533  409150 main.go:141] libmachine: Making call to close connection to plugin binary
I0510 18:18:56.371528  409150 main.go:141] libmachine: (functional-581506) DBG | Closing plugin on server side
I0510 18:18:56.371550  409150 main.go:141] libmachine: Making call to close driver server
I0510 18:18:56.371559  409150 main.go:141] libmachine: (functional-581506) Calling .Close
I0510 18:18:56.371835  409150 main.go:141] libmachine: (functional-581506) DBG | Closing plugin on server side
I0510 18:18:56.371882  409150 main.go:141] libmachine: Successfully made call to close driver server
I0510 18:18:56.371894  409150 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
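Note: the JSON above is a flat array of image objects, each carrying id, repoDigests, repoTags, and size fields. A quick way to pull just the tags (or a short id/size table) back out on the host is a jq one-liner along these lines — a reader convenience only, not part of the test, and it assumes jq is installed on the machine running the minikube binary:
	out/minikube-linux-amd64 -p functional-581506 image ls --format json | jq -r '.[].repoTags[]'
	out/minikube-linux-amd64 -p functional-581506 image ls --format json | jq -r '.[] | .id[0:12] + "  " + .size'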

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-581506 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:6679a9970a8b2f18647b33bf02e5e9895d286689256e2f7172481b4096e46a32
- registry.k8s.io/kube-apiserver@sha256:6c0f4ade3e5a34d8791a48671b127a00dc114e84b70ec4d92e586c17d68a1ca6
repoTags:
- registry.k8s.io/kube-apiserver:v1.33.0
size: "102858210"
- id: 1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9de627a31852175b8308cb7c8d92f15365672f6bf26026719cc1c05a03580bc4
- registry.k8s.io/kube-controller-manager@sha256:f0b32ab11fd06504608cdb9084f7284106b4f5f07f35eb8823e70ea0eaaf252a
repoTags:
- registry.k8s.io/kube-controller-manager:v1.33.0
size: "95653192"
- id: f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68
repoDigests:
- registry.k8s.io/kube-proxy@sha256:05f8984642d05b1b1a6c37605a4a566e46e7290f9291d17885f096c36861095b
- registry.k8s.io/kube-proxy@sha256:32b893c37d363b18711b397f6ccb29655e3d08183d410f1a93ad298992c9ea7e
repoTags:
- registry.k8s.io/kube-proxy:v1.33.0
size: "99145113"
- id: 8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:8dd2fbeb7f711da53a89ded239e54133f34110d98de887a39a9021e651b51f1f
- registry.k8s.io/kube-scheduler@sha256:b375b81c7f253be3f093232650b153288e7f90be3d02a025fd602b4b40fd95c5
repoTags:
- registry.k8s.io/kube-scheduler:v1.33.0
size: "74501448"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: df3849d954c98a7162c7bee7313ece357606e313d98ebd68b7aac5e961b1156f
repoDigests:
- docker.io/kindest/kindnetd@sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495
- docker.io/kindest/kindnetd@sha256:f3108bcefe4c9797081f9b4405e510eaec07ff17b8224077b3bad839452ebc97
repoTags:
- docker.io/kindest/kindnetd:v20250214-acbabc1a
size: "95703604"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-581506
size: "4943877"
- id: 0e0d4ec4d5d14ca69921cfd1b2093d0d8e26cefa42b9a37b63fe0c1391dd3bc2
repoDigests:
- localhost/minikube-local-cache-test@sha256:723a2f921ac79321b709c6f8bced4af5feea37a40b1c2497830de10a50fb2c88
repoTags:
- localhost/minikube-local-cache-test:functional-581506
size: "3330"
- id: 1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:2324f485c8db937628a18c293d946327f3a7229b9f77213e8f2256f0b616a4ee
- registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.0
size: "71169915"
- id: 499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1
repoDigests:
- registry.k8s.io/etcd@sha256:21d2177d708b53ac0fbd1c073c334d58f913eb75da293ff086610e61af03630a
- registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121
repoTags:
- registry.k8s.io/etcd:3.5.21-0
size: "154190592"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"

                                                
                                                
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-581506 image ls --format yaml --alsologtostderr:
I0510 18:18:52.441035  408699 out.go:345] Setting OutFile to fd 1 ...
I0510 18:18:52.441302  408699 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0510 18:18:52.441314  408699 out.go:358] Setting ErrFile to fd 2...
I0510 18:18:52.441317  408699 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0510 18:18:52.441507  408699 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-388787/.minikube/bin
I0510 18:18:52.442130  408699 config.go:182] Loaded profile config "functional-581506": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
I0510 18:18:52.442227  408699 config.go:182] Loaded profile config "functional-581506": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
I0510 18:18:52.442650  408699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0510 18:18:52.442715  408699 main.go:141] libmachine: Launching plugin server for driver kvm2
I0510 18:18:52.460749  408699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41141
I0510 18:18:52.461528  408699 main.go:141] libmachine: () Calling .GetVersion
I0510 18:18:52.462115  408699 main.go:141] libmachine: Using API Version  1
I0510 18:18:52.462146  408699 main.go:141] libmachine: () Calling .SetConfigRaw
I0510 18:18:52.462598  408699 main.go:141] libmachine: () Calling .GetMachineName
I0510 18:18:52.462768  408699 main.go:141] libmachine: (functional-581506) Calling .GetState
I0510 18:18:52.464898  408699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0510 18:18:52.464955  408699 main.go:141] libmachine: Launching plugin server for driver kvm2
I0510 18:18:52.482914  408699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42895
I0510 18:18:52.483494  408699 main.go:141] libmachine: () Calling .GetVersion
I0510 18:18:52.484135  408699 main.go:141] libmachine: Using API Version  1
I0510 18:18:52.484174  408699 main.go:141] libmachine: () Calling .SetConfigRaw
I0510 18:18:52.484582  408699 main.go:141] libmachine: () Calling .GetMachineName
I0510 18:18:52.484802  408699 main.go:141] libmachine: (functional-581506) Calling .DriverName
I0510 18:18:52.485041  408699 ssh_runner.go:195] Run: systemctl --version
I0510 18:18:52.485128  408699 main.go:141] libmachine: (functional-581506) Calling .GetSSHHostname
I0510 18:18:52.488866  408699 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined MAC address 52:54:00:34:2c:dc in network mk-functional-581506
I0510 18:18:52.489311  408699 main.go:141] libmachine: (functional-581506) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:2c:dc", ip: ""} in network mk-functional-581506: {Iface:virbr1 ExpiryTime:2025-05-10 19:00:46 +0000 UTC Type:0 Mac:52:54:00:34:2c:dc Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-581506 Clientid:01:52:54:00:34:2c:dc}
I0510 18:18:52.489343  408699 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined IP address 192.168.39.52 and MAC address 52:54:00:34:2c:dc in network mk-functional-581506
I0510 18:18:52.489520  408699 main.go:141] libmachine: (functional-581506) Calling .GetSSHPort
I0510 18:18:52.489721  408699 main.go:141] libmachine: (functional-581506) Calling .GetSSHKeyPath
I0510 18:18:52.489848  408699 main.go:141] libmachine: (functional-581506) Calling .GetSSHUsername
I0510 18:18:52.489965  408699 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/functional-581506/id_rsa Username:docker}
I0510 18:18:52.580156  408699 ssh_runner.go:195] Run: sudo crictl images --output json
I0510 18:18:52.620479  408699 main.go:141] libmachine: Making call to close driver server
I0510 18:18:52.620499  408699 main.go:141] libmachine: (functional-581506) Calling .Close
I0510 18:18:52.620802  408699 main.go:141] libmachine: Successfully made call to close driver server
I0510 18:18:52.620825  408699 main.go:141] libmachine: Making call to close connection to plugin binary
I0510 18:18:52.620837  408699 main.go:141] libmachine: Making call to close driver server
I0510 18:18:52.620847  408699 main.go:141] libmachine: (functional-581506) Calling .Close
I0510 18:18:52.621102  408699 main.go:141] libmachine: Successfully made call to close driver server
I0510 18:18:52.621117  408699 main.go:141] libmachine: Making call to close connection to plugin binary
I0510 18:18:52.621115  408699 main.go:141] libmachine: (functional-581506) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-581506 ssh pgrep buildkitd: exit status 1 (213.672819ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 image build -t localhost/my-image:functional-581506 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-amd64 -p functional-581506 image build -t localhost/my-image:functional-581506 testdata/build --alsologtostderr: (3.025087451s)
functional_test.go:337: (dbg) Stdout: out/minikube-linux-amd64 -p functional-581506 image build -t localhost/my-image:functional-581506 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> b6613321534
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-581506
--> 1be27181eeb
Successfully tagged localhost/my-image:functional-581506
1be27181eeb41655ff50498e113b43cb443e525b2acc1a174b8902e29215277a
functional_test.go:340: (dbg) Stderr: out/minikube-linux-amd64 -p functional-581506 image build -t localhost/my-image:functional-581506 testdata/build --alsologtostderr:
I0510 18:18:52.887254  408814 out.go:345] Setting OutFile to fd 1 ...
I0510 18:18:52.887748  408814 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0510 18:18:52.887807  408814 out.go:358] Setting ErrFile to fd 2...
I0510 18:18:52.887826  408814 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0510 18:18:52.888305  408814 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-388787/.minikube/bin
I0510 18:18:52.889426  408814 config.go:182] Loaded profile config "functional-581506": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
I0510 18:18:52.889999  408814 config.go:182] Loaded profile config "functional-581506": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
I0510 18:18:52.890359  408814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0510 18:18:52.890419  408814 main.go:141] libmachine: Launching plugin server for driver kvm2
I0510 18:18:52.905964  408814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44449
I0510 18:18:52.906436  408814 main.go:141] libmachine: () Calling .GetVersion
I0510 18:18:52.906905  408814 main.go:141] libmachine: Using API Version  1
I0510 18:18:52.906934  408814 main.go:141] libmachine: () Calling .SetConfigRaw
I0510 18:18:52.907362  408814 main.go:141] libmachine: () Calling .GetMachineName
I0510 18:18:52.907598  408814 main.go:141] libmachine: (functional-581506) Calling .GetState
I0510 18:18:52.909331  408814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0510 18:18:52.909372  408814 main.go:141] libmachine: Launching plugin server for driver kvm2
I0510 18:18:52.924381  408814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36719
I0510 18:18:52.924831  408814 main.go:141] libmachine: () Calling .GetVersion
I0510 18:18:52.925288  408814 main.go:141] libmachine: Using API Version  1
I0510 18:18:52.925311  408814 main.go:141] libmachine: () Calling .SetConfigRaw
I0510 18:18:52.925671  408814 main.go:141] libmachine: () Calling .GetMachineName
I0510 18:18:52.925880  408814 main.go:141] libmachine: (functional-581506) Calling .DriverName
I0510 18:18:52.926107  408814 ssh_runner.go:195] Run: systemctl --version
I0510 18:18:52.926135  408814 main.go:141] libmachine: (functional-581506) Calling .GetSSHHostname
I0510 18:18:52.928767  408814 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined MAC address 52:54:00:34:2c:dc in network mk-functional-581506
I0510 18:18:52.929204  408814 main.go:141] libmachine: (functional-581506) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:2c:dc", ip: ""} in network mk-functional-581506: {Iface:virbr1 ExpiryTime:2025-05-10 19:00:46 +0000 UTC Type:0 Mac:52:54:00:34:2c:dc Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-581506 Clientid:01:52:54:00:34:2c:dc}
I0510 18:18:52.929243  408814 main.go:141] libmachine: (functional-581506) DBG | domain functional-581506 has defined IP address 192.168.39.52 and MAC address 52:54:00:34:2c:dc in network mk-functional-581506
I0510 18:18:52.929341  408814 main.go:141] libmachine: (functional-581506) Calling .GetSSHPort
I0510 18:18:52.929504  408814 main.go:141] libmachine: (functional-581506) Calling .GetSSHKeyPath
I0510 18:18:52.929651  408814 main.go:141] libmachine: (functional-581506) Calling .GetSSHUsername
I0510 18:18:52.929830  408814 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/functional-581506/id_rsa Username:docker}
I0510 18:18:53.017903  408814 build_images.go:161] Building image from path: /tmp/build.450203806.tar
I0510 18:18:53.017967  408814 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0510 18:18:53.035171  408814 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.450203806.tar
I0510 18:18:53.040596  408814 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.450203806.tar: stat -c "%s %y" /var/lib/minikube/build/build.450203806.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.450203806.tar': No such file or directory
I0510 18:18:53.040625  408814 ssh_runner.go:362] scp /tmp/build.450203806.tar --> /var/lib/minikube/build/build.450203806.tar (3072 bytes)
I0510 18:18:53.083940  408814 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.450203806
I0510 18:18:53.100740  408814 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.450203806 -xf /var/lib/minikube/build/build.450203806.tar
I0510 18:18:53.113505  408814 crio.go:315] Building image: /var/lib/minikube/build/build.450203806
I0510 18:18:53.113623  408814 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-581506 /var/lib/minikube/build/build.450203806 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0510 18:18:55.806280  408814 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-581506 /var/lib/minikube/build/build.450203806 --cgroup-manager=cgroupfs: (2.692617557s)
I0510 18:18:55.806389  408814 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.450203806
I0510 18:18:55.828002  408814 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.450203806.tar
I0510 18:18:55.859200  408814 build_images.go:217] Built localhost/my-image:functional-581506 from /tmp/build.450203806.tar
I0510 18:18:55.859252  408814 build_images.go:133] succeeded building to: functional-581506
I0510 18:18:55.859257  408814 build_images.go:134] failed building to: 
I0510 18:18:55.859280  408814 main.go:141] libmachine: Making call to close driver server
I0510 18:18:55.859290  408814 main.go:141] libmachine: (functional-581506) Calling .Close
I0510 18:18:55.859670  408814 main.go:141] libmachine: (functional-581506) DBG | Closing plugin on server side
I0510 18:18:55.859706  408814 main.go:141] libmachine: Successfully made call to close driver server
I0510 18:18:55.859735  408814 main.go:141] libmachine: Making call to close connection to plugin binary
I0510 18:18:55.859753  408814 main.go:141] libmachine: Making call to close driver server
I0510 18:18:55.859770  408814 main.go:141] libmachine: (functional-581506) Calling .Close
I0510 18:18:55.860013  408814 main.go:141] libmachine: Successfully made call to close driver server
I0510 18:18:55.860028  408814 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.55s)
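Note: the build exercised here can be reproduced by hand. The three STEP lines in the stdout above imply a build context containing a content.txt file and a Dockerfile with a FROM/RUN/ADD triple; the sketch below recreates such a context under a hypothetical /tmp/minikube-build-demo directory (the directory name and the file contents are placeholders, not the suite's actual testdata/build):
	mkdir -p /tmp/minikube-build-demo && cd /tmp/minikube-build-demo
	echo hello > content.txt
	printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
	out/minikube-linux-amd64 -p functional-581506 image build -t localhost/my-image:functional-581506 . --alsologtostderr
	out/minikube-linux-amd64 -p functional-581506 image ls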

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-581506
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 image load --daemon kicbase/echo-server:functional-581506 --alsologtostderr
functional_test.go:372: (dbg) Done: out/minikube-linux-amd64 -p functional-581506 image load --daemon kicbase/echo-server:functional-581506 --alsologtostderr: (1.552980325s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.80s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 image load --daemon kicbase/echo-server:functional-581506 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.94s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-581506
functional_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 image load --daemon kicbase/echo-server:functional-581506 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.05s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 image save kicbase/echo-server:functional-581506 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.54s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 image rm kicbase/echo-server:functional-581506 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.96s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-581506
functional_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 image save --daemon kicbase/echo-server:functional-581506 --alsologtostderr
functional_test.go:449: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-581506
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.61s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1332: Took "281.798839ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1346: Took "49.678355ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1383: Took "290.579872ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1396: Took "50.045781ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-581506 /tmp/TestFunctionalparallelMountCmdspecific-port44352173/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-581506 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (213.011054ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0510 18:16:02.804228  395980 retry.go:31] will retry after 536.220363ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-581506 /tmp/TestFunctionalparallelMountCmdspecific-port44352173/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-581506 ssh "sudo umount -f /mount-9p": exit status 1 (207.087684ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-581506 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-581506 /tmp/TestFunctionalparallelMountCmdspecific-port44352173/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.78s)
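Note: the specific-port variant above can also be exercised manually. The sketch below mirrors the commands the test runs, using a hypothetical /tmp/mount-demo host directory; as in the log, the first findmnt may need a retry while the 9p server finishes starting:
	out/minikube-linux-amd64 mount -p functional-581506 /tmp/mount-demo:/mount-9p --port 46464 &
	out/minikube-linux-amd64 -p functional-581506 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-581506 ssh "sudo umount -f /mount-9p"
	kill %1    # stop the background mount process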

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-581506 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3783465736/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-581506 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3783465736/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-581506 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3783465736/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-581506 ssh "findmnt -T" /mount1: exit status 1 (237.562215ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0510 18:16:04.608783  395980 retry.go:31] will retry after 649.746271ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-581506 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-581506 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3783465736/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-581506 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3783465736/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-581506 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3783465736/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.55s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 service list
functional_test.go:1476: (dbg) Done: out/minikube-linux-amd64 -p functional-581506 service list: (1.279975136s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.28s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-amd64 -p functional-581506 service list -o json
functional_test.go:1506: (dbg) Done: out/minikube-linux-amd64 -p functional-581506 service list -o json: (1.28064965s)
functional_test.go:1511: Took "1.280762169s" to run "out/minikube-linux-amd64 -p functional-581506 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.28s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-581506
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-581506
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-581506
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (229.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 start --ha --memory 2200 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E0510 18:23:48.489994  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:23:48.496494  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:23:48.507933  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:23:48.529460  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:23:48.570971  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:23:48.652513  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:23:48.814162  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:23:49.135906  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:23:49.778064  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:23:51.059594  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:23:53.621302  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:23:58.743702  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:24:08.985555  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:24:29.467444  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:24:37.810237  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-335528 start --ha --memory 2200 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (3m48.736131958s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (229.51s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (6.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-335528 kubectl -- rollout status deployment/busybox: (4.157249641s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 kubectl -- exec busybox-58667487b6-45n8j -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 kubectl -- exec busybox-58667487b6-8t9vt -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 kubectl -- exec busybox-58667487b6-pj66t -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 kubectl -- exec busybox-58667487b6-45n8j -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 kubectl -- exec busybox-58667487b6-8t9vt -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 kubectl -- exec busybox-58667487b6-pj66t -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 kubectl -- exec busybox-58667487b6-45n8j -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 kubectl -- exec busybox-58667487b6-8t9vt -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 kubectl -- exec busybox-58667487b6-pj66t -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.45s)

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 kubectl -- exec busybox-58667487b6-45n8j -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 kubectl -- exec busybox-58667487b6-45n8j -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 kubectl -- exec busybox-58667487b6-8t9vt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 kubectl -- exec busybox-58667487b6-8t9vt -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 kubectl -- exec busybox-58667487b6-pj66t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 kubectl -- exec busybox-58667487b6-pj66t -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.29s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (51.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 node add --alsologtostderr -v 5
E0510 18:25:10.429633  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-335528 node add --alsologtostderr -v 5: (50.405137636s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (51.35s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-335528 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.96s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.96s)

TestMultiControlPlane/serial/CopyFile (13.98s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 status --output json --alsologtostderr -v 5
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 cp testdata/cp-test.txt ha-335528:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 ssh -n ha-335528 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 cp ha-335528:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile449989072/001/cp-test_ha-335528.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 ssh -n ha-335528 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 cp ha-335528:/home/docker/cp-test.txt ha-335528-m02:/home/docker/cp-test_ha-335528_ha-335528-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 ssh -n ha-335528 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 ssh -n ha-335528-m02 "sudo cat /home/docker/cp-test_ha-335528_ha-335528-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 cp ha-335528:/home/docker/cp-test.txt ha-335528-m03:/home/docker/cp-test_ha-335528_ha-335528-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 ssh -n ha-335528 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 ssh -n ha-335528-m03 "sudo cat /home/docker/cp-test_ha-335528_ha-335528-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 cp ha-335528:/home/docker/cp-test.txt ha-335528-m04:/home/docker/cp-test_ha-335528_ha-335528-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 ssh -n ha-335528 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 ssh -n ha-335528-m04 "sudo cat /home/docker/cp-test_ha-335528_ha-335528-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 cp testdata/cp-test.txt ha-335528-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 ssh -n ha-335528-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 cp ha-335528-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile449989072/001/cp-test_ha-335528-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 ssh -n ha-335528-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 cp ha-335528-m02:/home/docker/cp-test.txt ha-335528:/home/docker/cp-test_ha-335528-m02_ha-335528.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 ssh -n ha-335528-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 ssh -n ha-335528 "sudo cat /home/docker/cp-test_ha-335528-m02_ha-335528.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 cp ha-335528-m02:/home/docker/cp-test.txt ha-335528-m03:/home/docker/cp-test_ha-335528-m02_ha-335528-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 ssh -n ha-335528-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 ssh -n ha-335528-m03 "sudo cat /home/docker/cp-test_ha-335528-m02_ha-335528-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 cp ha-335528-m02:/home/docker/cp-test.txt ha-335528-m04:/home/docker/cp-test_ha-335528-m02_ha-335528-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 ssh -n ha-335528-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 ssh -n ha-335528-m04 "sudo cat /home/docker/cp-test_ha-335528-m02_ha-335528-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 cp testdata/cp-test.txt ha-335528-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 ssh -n ha-335528-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 cp ha-335528-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile449989072/001/cp-test_ha-335528-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 ssh -n ha-335528-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 cp ha-335528-m03:/home/docker/cp-test.txt ha-335528:/home/docker/cp-test_ha-335528-m03_ha-335528.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 ssh -n ha-335528-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 ssh -n ha-335528 "sudo cat /home/docker/cp-test_ha-335528-m03_ha-335528.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 cp ha-335528-m03:/home/docker/cp-test.txt ha-335528-m02:/home/docker/cp-test_ha-335528-m03_ha-335528-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 ssh -n ha-335528-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 ssh -n ha-335528-m02 "sudo cat /home/docker/cp-test_ha-335528-m03_ha-335528-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 cp ha-335528-m03:/home/docker/cp-test.txt ha-335528-m04:/home/docker/cp-test_ha-335528-m03_ha-335528-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 ssh -n ha-335528-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 ssh -n ha-335528-m04 "sudo cat /home/docker/cp-test_ha-335528-m03_ha-335528-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 cp testdata/cp-test.txt ha-335528-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 ssh -n ha-335528-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 cp ha-335528-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile449989072/001/cp-test_ha-335528-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 ssh -n ha-335528-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 cp ha-335528-m04:/home/docker/cp-test.txt ha-335528:/home/docker/cp-test_ha-335528-m04_ha-335528.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 ssh -n ha-335528-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 ssh -n ha-335528 "sudo cat /home/docker/cp-test_ha-335528-m04_ha-335528.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 cp ha-335528-m04:/home/docker/cp-test.txt ha-335528-m02:/home/docker/cp-test_ha-335528-m04_ha-335528-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 ssh -n ha-335528-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 ssh -n ha-335528-m02 "sudo cat /home/docker/cp-test_ha-335528-m04_ha-335528-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 cp ha-335528-m04:/home/docker/cp-test.txt ha-335528-m03:/home/docker/cp-test_ha-335528-m04_ha-335528-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 ssh -n ha-335528-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 ssh -n ha-335528-m03 "sudo cat /home/docker/cp-test_ha-335528-m04_ha-335528-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.98s)

TestMultiControlPlane/serial/StopSecondaryNode (91.75s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 node stop m02 --alsologtostderr -v 5
E0510 18:26:32.351720  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-335528 node stop m02 --alsologtostderr -v 5: (1m31.023425134s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-335528 status --alsologtostderr -v 5: exit status 7 (728.250082ms)

-- stdout --
	ha-335528
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-335528-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-335528-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-335528-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0510 18:27:44.359767  414417 out.go:345] Setting OutFile to fd 1 ...
	I0510 18:27:44.359904  414417 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 18:27:44.359913  414417 out.go:358] Setting ErrFile to fd 2...
	I0510 18:27:44.359917  414417 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 18:27:44.360105  414417 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-388787/.minikube/bin
	I0510 18:27:44.360259  414417 out.go:352] Setting JSON to false
	I0510 18:27:44.360292  414417 mustload.go:65] Loading cluster: ha-335528
	I0510 18:27:44.360436  414417 notify.go:220] Checking for updates...
	I0510 18:27:44.360633  414417 config.go:182] Loaded profile config "ha-335528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 18:27:44.360652  414417 status.go:174] checking status of ha-335528 ...
	I0510 18:27:44.361114  414417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 18:27:44.361153  414417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:27:44.384844  414417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45309
	I0510 18:27:44.385514  414417 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:27:44.386189  414417 main.go:141] libmachine: Using API Version  1
	I0510 18:27:44.386208  414417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:27:44.386629  414417 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:27:44.386831  414417 main.go:141] libmachine: (ha-335528) Calling .GetState
	I0510 18:27:44.388587  414417 status.go:371] ha-335528 host status = "Running" (err=<nil>)
	I0510 18:27:44.388612  414417 host.go:66] Checking if "ha-335528" exists ...
	I0510 18:27:44.388941  414417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 18:27:44.388984  414417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:27:44.404870  414417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35357
	I0510 18:27:44.405409  414417 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:27:44.405963  414417 main.go:141] libmachine: Using API Version  1
	I0510 18:27:44.405991  414417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:27:44.406332  414417 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:27:44.406557  414417 main.go:141] libmachine: (ha-335528) Calling .GetIP
	I0510 18:27:44.409776  414417 main.go:141] libmachine: (ha-335528) DBG | domain ha-335528 has defined MAC address 52:54:00:fc:63:53 in network mk-ha-335528
	I0510 18:27:44.410310  414417 main.go:141] libmachine: (ha-335528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:63:53", ip: ""} in network mk-ha-335528: {Iface:virbr1 ExpiryTime:2025-05-10 19:21:25 +0000 UTC Type:0 Mac:52:54:00:fc:63:53 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-335528 Clientid:01:52:54:00:fc:63:53}
	I0510 18:27:44.410342  414417 main.go:141] libmachine: (ha-335528) DBG | domain ha-335528 has defined IP address 192.168.39.228 and MAC address 52:54:00:fc:63:53 in network mk-ha-335528
	I0510 18:27:44.410479  414417 host.go:66] Checking if "ha-335528" exists ...
	I0510 18:27:44.410796  414417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 18:27:44.410851  414417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:27:44.426611  414417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42053
	I0510 18:27:44.427106  414417 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:27:44.427638  414417 main.go:141] libmachine: Using API Version  1
	I0510 18:27:44.427699  414417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:27:44.428088  414417 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:27:44.428330  414417 main.go:141] libmachine: (ha-335528) Calling .DriverName
	I0510 18:27:44.428569  414417 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0510 18:27:44.428617  414417 main.go:141] libmachine: (ha-335528) Calling .GetSSHHostname
	I0510 18:27:44.432018  414417 main.go:141] libmachine: (ha-335528) DBG | domain ha-335528 has defined MAC address 52:54:00:fc:63:53 in network mk-ha-335528
	I0510 18:27:44.432458  414417 main.go:141] libmachine: (ha-335528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:63:53", ip: ""} in network mk-ha-335528: {Iface:virbr1 ExpiryTime:2025-05-10 19:21:25 +0000 UTC Type:0 Mac:52:54:00:fc:63:53 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-335528 Clientid:01:52:54:00:fc:63:53}
	I0510 18:27:44.432474  414417 main.go:141] libmachine: (ha-335528) DBG | domain ha-335528 has defined IP address 192.168.39.228 and MAC address 52:54:00:fc:63:53 in network mk-ha-335528
	I0510 18:27:44.432674  414417 main.go:141] libmachine: (ha-335528) Calling .GetSSHPort
	I0510 18:27:44.432848  414417 main.go:141] libmachine: (ha-335528) Calling .GetSSHKeyPath
	I0510 18:27:44.433021  414417 main.go:141] libmachine: (ha-335528) Calling .GetSSHUsername
	I0510 18:27:44.433171  414417 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/ha-335528/id_rsa Username:docker}
	I0510 18:27:44.525772  414417 ssh_runner.go:195] Run: systemctl --version
	I0510 18:27:44.533307  414417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0510 18:27:44.556435  414417 kubeconfig.go:125] found "ha-335528" server: "https://192.168.39.254:8443"
	I0510 18:27:44.556486  414417 api_server.go:166] Checking apiserver status ...
	I0510 18:27:44.556526  414417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 18:27:44.582091  414417 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1491/cgroup
	W0510 18:27:44.596856  414417 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1491/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0510 18:27:44.596914  414417 ssh_runner.go:195] Run: ls
	I0510 18:27:44.603056  414417 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0510 18:27:44.609988  414417 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0510 18:27:44.610018  414417 status.go:463] ha-335528 apiserver status = Running (err=<nil>)
	I0510 18:27:44.610029  414417 status.go:176] ha-335528 status: &{Name:ha-335528 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0510 18:27:44.610046  414417 status.go:174] checking status of ha-335528-m02 ...
	I0510 18:27:44.610382  414417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 18:27:44.610425  414417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:27:44.627104  414417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45237
	I0510 18:27:44.627574  414417 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:27:44.628075  414417 main.go:141] libmachine: Using API Version  1
	I0510 18:27:44.628116  414417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:27:44.628447  414417 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:27:44.628656  414417 main.go:141] libmachine: (ha-335528-m02) Calling .GetState
	I0510 18:27:44.630465  414417 status.go:371] ha-335528-m02 host status = "Stopped" (err=<nil>)
	I0510 18:27:44.630481  414417 status.go:384] host is not running, skipping remaining checks
	I0510 18:27:44.630486  414417 status.go:176] ha-335528-m02 status: &{Name:ha-335528-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0510 18:27:44.630505  414417 status.go:174] checking status of ha-335528-m03 ...
	I0510 18:27:44.630848  414417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 18:27:44.630898  414417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:27:44.648018  414417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35421
	I0510 18:27:44.648639  414417 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:27:44.649269  414417 main.go:141] libmachine: Using API Version  1
	I0510 18:27:44.649287  414417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:27:44.649666  414417 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:27:44.649910  414417 main.go:141] libmachine: (ha-335528-m03) Calling .GetState
	I0510 18:27:44.651610  414417 status.go:371] ha-335528-m03 host status = "Running" (err=<nil>)
	I0510 18:27:44.651627  414417 host.go:66] Checking if "ha-335528-m03" exists ...
	I0510 18:27:44.651981  414417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 18:27:44.652025  414417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:27:44.667006  414417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41693
	I0510 18:27:44.667575  414417 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:27:44.668066  414417 main.go:141] libmachine: Using API Version  1
	I0510 18:27:44.668090  414417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:27:44.668405  414417 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:27:44.668569  414417 main.go:141] libmachine: (ha-335528-m03) Calling .GetIP
	I0510 18:27:44.671207  414417 main.go:141] libmachine: (ha-335528-m03) DBG | domain ha-335528-m03 has defined MAC address 52:54:00:30:5d:26 in network mk-ha-335528
	I0510 18:27:44.671692  414417 main.go:141] libmachine: (ha-335528-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:5d:26", ip: ""} in network mk-ha-335528: {Iface:virbr1 ExpiryTime:2025-05-10 19:23:44 +0000 UTC Type:0 Mac:52:54:00:30:5d:26 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:ha-335528-m03 Clientid:01:52:54:00:30:5d:26}
	I0510 18:27:44.671722  414417 main.go:141] libmachine: (ha-335528-m03) DBG | domain ha-335528-m03 has defined IP address 192.168.39.213 and MAC address 52:54:00:30:5d:26 in network mk-ha-335528
	I0510 18:27:44.671902  414417 host.go:66] Checking if "ha-335528-m03" exists ...
	I0510 18:27:44.672218  414417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 18:27:44.672254  414417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:27:44.688325  414417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39531
	I0510 18:27:44.688802  414417 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:27:44.689199  414417 main.go:141] libmachine: Using API Version  1
	I0510 18:27:44.689223  414417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:27:44.689539  414417 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:27:44.689712  414417 main.go:141] libmachine: (ha-335528-m03) Calling .DriverName
	I0510 18:27:44.689978  414417 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0510 18:27:44.690003  414417 main.go:141] libmachine: (ha-335528-m03) Calling .GetSSHHostname
	I0510 18:27:44.693102  414417 main.go:141] libmachine: (ha-335528-m03) DBG | domain ha-335528-m03 has defined MAC address 52:54:00:30:5d:26 in network mk-ha-335528
	I0510 18:27:44.693499  414417 main.go:141] libmachine: (ha-335528-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:5d:26", ip: ""} in network mk-ha-335528: {Iface:virbr1 ExpiryTime:2025-05-10 19:23:44 +0000 UTC Type:0 Mac:52:54:00:30:5d:26 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:ha-335528-m03 Clientid:01:52:54:00:30:5d:26}
	I0510 18:27:44.693527  414417 main.go:141] libmachine: (ha-335528-m03) DBG | domain ha-335528-m03 has defined IP address 192.168.39.213 and MAC address 52:54:00:30:5d:26 in network mk-ha-335528
	I0510 18:27:44.693689  414417 main.go:141] libmachine: (ha-335528-m03) Calling .GetSSHPort
	I0510 18:27:44.693927  414417 main.go:141] libmachine: (ha-335528-m03) Calling .GetSSHKeyPath
	I0510 18:27:44.694111  414417 main.go:141] libmachine: (ha-335528-m03) Calling .GetSSHUsername
	I0510 18:27:44.694328  414417 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/ha-335528-m03/id_rsa Username:docker}
	I0510 18:27:44.790408  414417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0510 18:27:44.818172  414417 kubeconfig.go:125] found "ha-335528" server: "https://192.168.39.254:8443"
	I0510 18:27:44.818209  414417 api_server.go:166] Checking apiserver status ...
	I0510 18:27:44.818257  414417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 18:27:44.839495  414417 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1763/cgroup
	W0510 18:27:44.850809  414417 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1763/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0510 18:27:44.850903  414417 ssh_runner.go:195] Run: ls
	I0510 18:27:44.857076  414417 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0510 18:27:44.861867  414417 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0510 18:27:44.861905  414417 status.go:463] ha-335528-m03 apiserver status = Running (err=<nil>)
	I0510 18:27:44.861918  414417 status.go:176] ha-335528-m03 status: &{Name:ha-335528-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0510 18:27:44.861935  414417 status.go:174] checking status of ha-335528-m04 ...
	I0510 18:27:44.862364  414417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 18:27:44.862411  414417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:27:44.878718  414417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37777
	I0510 18:27:44.879286  414417 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:27:44.879707  414417 main.go:141] libmachine: Using API Version  1
	I0510 18:27:44.879726  414417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:27:44.880171  414417 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:27:44.880471  414417 main.go:141] libmachine: (ha-335528-m04) Calling .GetState
	I0510 18:27:44.882164  414417 status.go:371] ha-335528-m04 host status = "Running" (err=<nil>)
	I0510 18:27:44.882186  414417 host.go:66] Checking if "ha-335528-m04" exists ...
	I0510 18:27:44.882600  414417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 18:27:44.882653  414417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:27:44.899032  414417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38799
	I0510 18:27:44.899501  414417 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:27:44.900021  414417 main.go:141] libmachine: Using API Version  1
	I0510 18:27:44.900043  414417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:27:44.900382  414417 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:27:44.900641  414417 main.go:141] libmachine: (ha-335528-m04) Calling .GetIP
	I0510 18:27:44.904238  414417 main.go:141] libmachine: (ha-335528-m04) DBG | domain ha-335528-m04 has defined MAC address 52:54:00:25:8f:fd in network mk-ha-335528
	I0510 18:27:44.904692  414417 main.go:141] libmachine: (ha-335528-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8f:fd", ip: ""} in network mk-ha-335528: {Iface:virbr1 ExpiryTime:2025-05-10 19:25:23 +0000 UTC Type:0 Mac:52:54:00:25:8f:fd Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-335528-m04 Clientid:01:52:54:00:25:8f:fd}
	I0510 18:27:44.904715  414417 main.go:141] libmachine: (ha-335528-m04) DBG | domain ha-335528-m04 has defined IP address 192.168.39.31 and MAC address 52:54:00:25:8f:fd in network mk-ha-335528
	I0510 18:27:44.904908  414417 host.go:66] Checking if "ha-335528-m04" exists ...
	I0510 18:27:44.905331  414417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 18:27:44.905378  414417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:27:44.921664  414417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43415
	I0510 18:27:44.922211  414417 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:27:44.922763  414417 main.go:141] libmachine: Using API Version  1
	I0510 18:27:44.922788  414417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:27:44.923388  414417 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:27:44.923625  414417 main.go:141] libmachine: (ha-335528-m04) Calling .DriverName
	I0510 18:27:44.923849  414417 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0510 18:27:44.923883  414417 main.go:141] libmachine: (ha-335528-m04) Calling .GetSSHHostname
	I0510 18:27:44.926592  414417 main.go:141] libmachine: (ha-335528-m04) DBG | domain ha-335528-m04 has defined MAC address 52:54:00:25:8f:fd in network mk-ha-335528
	I0510 18:27:44.927022  414417 main.go:141] libmachine: (ha-335528-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8f:fd", ip: ""} in network mk-ha-335528: {Iface:virbr1 ExpiryTime:2025-05-10 19:25:23 +0000 UTC Type:0 Mac:52:54:00:25:8f:fd Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-335528-m04 Clientid:01:52:54:00:25:8f:fd}
	I0510 18:27:44.927048  414417 main.go:141] libmachine: (ha-335528-m04) DBG | domain ha-335528-m04 has defined IP address 192.168.39.31 and MAC address 52:54:00:25:8f:fd in network mk-ha-335528
	I0510 18:27:44.927269  414417 main.go:141] libmachine: (ha-335528-m04) Calling .GetSSHPort
	I0510 18:27:44.927466  414417 main.go:141] libmachine: (ha-335528-m04) Calling .GetSSHKeyPath
	I0510 18:27:44.927616  414417 main.go:141] libmachine: (ha-335528-m04) Calling .GetSSHUsername
	I0510 18:27:44.927765  414417 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/ha-335528-m04/id_rsa Username:docker}
	I0510 18:27:45.018019  414417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0510 18:27:45.038187  414417 status.go:176] ha-335528-m04 status: &{Name:ha-335528-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.75s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

TestMultiControlPlane/serial/RestartSecondaryNode (36.27s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-335528 node start m02 --alsologtostderr -v 5: (35.093198759s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-335528 status --alsologtostderr -v 5: (1.085812888s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (36.27s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.05s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.047172462s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.05s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (411.62s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 stop --alsologtostderr -v 5
E0510 18:28:48.489375  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:29:16.194167  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:29:37.810200  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:32:40.889131  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-335528 stop --alsologtostderr -v 5: (4m35.223832998s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 start --wait true --alsologtostderr -v 5
E0510 18:33:48.489027  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:34:37.810523  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-335528 start --wait true --alsologtostderr -v 5: (2m16.267844398s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (411.62s)

TestMultiControlPlane/serial/DeleteSecondaryNode (19.57s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-335528 node delete m03 --alsologtostderr -v 5: (18.735081081s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (19.57s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.69s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.69s)

TestMultiControlPlane/serial/StopCluster (273.07s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 stop --alsologtostderr -v 5
E0510 18:38:48.489679  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:39:37.810352  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-335528 stop --alsologtostderr -v 5: (4m32.955878158s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-335528 status --alsologtostderr -v 5: exit status 7 (116.501843ms)

-- stdout --
	ha-335528
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-335528-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-335528-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0510 18:40:07.962470  418511 out.go:345] Setting OutFile to fd 1 ...
	I0510 18:40:07.962570  418511 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 18:40:07.962575  418511 out.go:358] Setting ErrFile to fd 2...
	I0510 18:40:07.962579  418511 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 18:40:07.962792  418511 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-388787/.minikube/bin
	I0510 18:40:07.962974  418511 out.go:352] Setting JSON to false
	I0510 18:40:07.963008  418511 mustload.go:65] Loading cluster: ha-335528
	I0510 18:40:07.963103  418511 notify.go:220] Checking for updates...
	I0510 18:40:07.963390  418511 config.go:182] Loaded profile config "ha-335528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 18:40:07.963413  418511 status.go:174] checking status of ha-335528 ...
	I0510 18:40:07.963899  418511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 18:40:07.963950  418511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:40:07.988374  418511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38919
	I0510 18:40:07.988916  418511 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:40:07.989534  418511 main.go:141] libmachine: Using API Version  1
	I0510 18:40:07.989561  418511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:40:07.989951  418511 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:40:07.990165  418511 main.go:141] libmachine: (ha-335528) Calling .GetState
	I0510 18:40:07.991800  418511 status.go:371] ha-335528 host status = "Stopped" (err=<nil>)
	I0510 18:40:07.991818  418511 status.go:384] host is not running, skipping remaining checks
	I0510 18:40:07.991826  418511 status.go:176] ha-335528 status: &{Name:ha-335528 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0510 18:40:07.991850  418511 status.go:174] checking status of ha-335528-m02 ...
	I0510 18:40:07.992273  418511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 18:40:07.992321  418511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:40:08.007646  418511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42115
	I0510 18:40:08.008115  418511 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:40:08.008585  418511 main.go:141] libmachine: Using API Version  1
	I0510 18:40:08.008608  418511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:40:08.009080  418511 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:40:08.009268  418511 main.go:141] libmachine: (ha-335528-m02) Calling .GetState
	I0510 18:40:08.011172  418511 status.go:371] ha-335528-m02 host status = "Stopped" (err=<nil>)
	I0510 18:40:08.011194  418511 status.go:384] host is not running, skipping remaining checks
	I0510 18:40:08.011201  418511 status.go:176] ha-335528-m02 status: &{Name:ha-335528-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0510 18:40:08.011222  418511 status.go:174] checking status of ha-335528-m04 ...
	I0510 18:40:08.011532  418511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 18:40:08.011574  418511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:40:08.026680  418511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39471
	I0510 18:40:08.027214  418511 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:40:08.027741  418511 main.go:141] libmachine: Using API Version  1
	I0510 18:40:08.027767  418511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:40:08.028099  418511 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:40:08.028256  418511 main.go:141] libmachine: (ha-335528-m04) Calling .GetState
	I0510 18:40:08.029948  418511 status.go:371] ha-335528-m04 host status = "Stopped" (err=<nil>)
	I0510 18:40:08.029967  418511 status.go:384] host is not running, skipping remaining checks
	I0510 18:40:08.029974  418511 status.go:176] ha-335528-m04 status: &{Name:ha-335528-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (273.07s)

TestMultiControlPlane/serial/RestartCluster (126.79s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E0510 18:40:11.556052  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-335528 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (2m5.942603029s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (126.79s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

TestMultiControlPlane/serial/AddSecondaryNode (114.66s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 node add --control-plane --alsologtostderr -v 5
E0510 18:43:48.489334  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-335528 node add --control-plane --alsologtostderr -v 5: (1m53.711893954s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-335528 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (114.66s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.96s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.96s)

TestJSONOutput/start/Command (88.11s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-132514 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0510 18:44:37.819264  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-132514 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m28.106355251s)
--- PASS: TestJSONOutput/start/Command (88.11s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.81s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-132514 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.81s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.75s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-132514 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.75s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.35s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-132514 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-132514 --output=json --user=testUser: (7.3473931s)
--- PASS: TestJSONOutput/stop/Command (7.35s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-402374 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-402374 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (68.356057ms)

-- stdout --
	{"specversion":"1.0","id":"07fadf08-09b5-45b7-821c-d7f55fa1b414","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-402374] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"bf2c4a4d-c35f-4261-acf6-530a3c7a484a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20720"}}
	{"specversion":"1.0","id":"1f869b22-2533-4abc-ab84-ff17a0f357c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"440170e0-65f5-4baa-8ff7-2df6c222da92","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20720-388787/kubeconfig"}}
	{"specversion":"1.0","id":"a1f42ce4-f6d0-49f5-9125-1b8864a0a6c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-388787/.minikube"}}
	{"specversion":"1.0","id":"011c9097-e614-4641-af80-0791b26aae66","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"0d191f79-51f6-4fd4-bdad-0b4c314fa073","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a659933c-d32b-46d9-a883-e15cf99ecfc1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-402374" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-402374
--- PASS: TestErrorJSONOutput (0.21s)
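
For reference, every line that --output=json emits is a CloudEvents-style JSON object with exactly the fields visible in the stdout above (specversion, id, source, type, datacontenttype, data). A minimal Go sketch for consuming such a stream, assuming only those fields and the event type strings shown in this log:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the fields visible in the minikube --output=json stream above.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Read events from stdin, e.g.:
	//   minikube start -p demo --output=json | go run main.go
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error %s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}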

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (99.47s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-215182 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-215182 --driver=kvm2  --container-runtime=crio: (48.627599652s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-230981 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-230981 --driver=kvm2  --container-runtime=crio: (48.030737763s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-215182
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-230981
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-230981" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-230981
helpers_test.go:175: Cleaning up "first-215182" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-215182
--- PASS: TestMinikubeProfile (99.47s)
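
The "profile list -ojson" calls above return machine-readable profile data. A small Go sketch that shells out to the same command and prints the profile names; the "valid"/"Name" field names are an assumption about current minikube output, not something this log shows:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Same command the test runs; adjust the binary path for a normal install.
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-o", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	// Assumed shape: {"valid": [{"Name": "..."}, ...], "invalid": [...]}.
	var profiles struct {
		Valid []struct {
			Name string `json:"Name"`
		} `json:"valid"`
	}
	if err := json.Unmarshal(out, &profiles); err != nil {
		log.Fatal(err)
	}
	for _, p := range profiles.Valid {
		fmt.Println(p.Name)
	}
}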

                                                
                                    
TestMountStart/serial/StartWithMountFirst (32.08s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-168671 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-168671 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (31.07681624s)
--- PASS: TestMountStart/serial/StartWithMountFirst (32.08s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-168671 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-168671 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.41s)
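
The two ssh commands above are the entire verification: the mount point must be listable inside the guest and a 9p filesystem must show up in the guest's mount table. A hedged Go sketch of the same check, reusing the binary path and profile name from this run:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	const bin = "out/minikube-linux-amd64" // path used by the test harness
	const profile = "mount-start-1-168671" // profile from this run

	// List the mounted directory inside the guest.
	if err := exec.Command(bin, "-p", profile, "ssh", "--", "ls", "/minikube-host").Run(); err != nil {
		log.Fatalf("mount dir not readable: %v", err)
	}
	// Confirm a 9p filesystem appears in the guest mount table.
	out, err := exec.Command(bin, "-p", profile, "ssh", "--", "mount").Output()
	if err != nil {
		log.Fatal(err)
	}
	if !strings.Contains(string(out), "9p") {
		log.Fatal("no 9p mount found")
	}
	fmt.Println("9p mount verified")
}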

                                                
                                    
TestMountStart/serial/StartWithMountSecond (27.16s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-194371 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-194371 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.160701817s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.16s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-194371 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-194371 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.9s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-168671 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.90s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-194371 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-194371 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.41s)

                                                
                                    
TestMountStart/serial/Stop (1.43s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-194371
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-194371: (1.429041606s)
--- PASS: TestMountStart/serial/Stop (1.43s)

                                                
                                    
TestMountStart/serial/RestartStopped (24.27s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-194371
E0510 18:48:48.489765  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-194371: (23.273893725s)
--- PASS: TestMountStart/serial/RestartStopped (24.27s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.43s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-194371 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-194371 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.43s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (114.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-247612 --wait=true --memory=2200 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0510 18:49:20.890685  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:49:37.815453  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-247612 --wait=true --memory=2200 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m54.327003835s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247612 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (114.78s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-247612 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-247612 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-247612 -- rollout status deployment/busybox: (3.788327304s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-247612 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-247612 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-247612 -- exec busybox-58667487b6-5xmvv -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-247612 -- exec busybox-58667487b6-gtz7w -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-247612 -- exec busybox-58667487b6-5xmvv -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-247612 -- exec busybox-58667487b6-gtz7w -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-247612 -- exec busybox-58667487b6-5xmvv -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-247612 -- exec busybox-58667487b6-gtz7w -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.38s)
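
The sequence above is a cross-node DNS smoke test: list the busybox pod names (one per node), then resolve an external name and the cluster-internal service names from inside each pod. A rough Go equivalent of that loop; the context name comes from this run and the kubeconfig is assumed to point at the same cluster:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Pod names, as produced by the jsonpath query in the log.
	out, err := exec.Command("kubectl", "--context", "multinode-247612",
		"get", "pods", "-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		log.Fatal(err)
	}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range strings.Fields(string(out)) {
		for _, name := range names {
			// nslookup must succeed from every pod, i.e. from every node.
			cmd := exec.Command("kubectl", "--context", "multinode-247612", "exec", pod, "--", "nslookup", name)
			if err := cmd.Run(); err != nil {
				log.Fatalf("%s: cannot resolve %s: %v", pod, name, err)
			}
			fmt.Printf("%s resolved %s\n", pod, name)
		}
	}
}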

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-247612 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-247612 -- exec busybox-58667487b6-5xmvv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-247612 -- exec busybox-58667487b6-5xmvv -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-247612 -- exec busybox-58667487b6-gtz7w -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-247612 -- exec busybox-58667487b6-gtz7w -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.83s)
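
The pipeline "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3" takes the fifth line of busybox's nslookup output and its third space-separated field, which in that layout is the host IP (192.168.39.1 here); each pod then pings that address. A small Go sketch of the same field extraction; the sample output below is illustrative and the line/field layout is the same assumption the test makes:

package main

import (
	"fmt"
	"strings"
)

// hostIP mimics `awk 'NR==5' | cut -d' ' -f3`: take line 5, then field 3.
func hostIP(nslookupOutput string) string {
	lines := strings.Split(nslookupOutput, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	// Hypothetical busybox nslookup output with the host record on line 5.
	sample := "Server:    10.96.0.10\n" +
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
		"\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 192.168.39.1 host.minikube.internal\n"
	fmt.Println(hostIP(sample)) // 192.168.39.1
}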

                                                
                                    
TestMultiNode/serial/AddNode (48.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-247612 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-247612 -v=5 --alsologtostderr: (47.529228073s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247612 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (48.16s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-247612 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.63s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247612 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247612 cp testdata/cp-test.txt multinode-247612:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247612 ssh -n multinode-247612 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247612 cp multinode-247612:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2119965336/001/cp-test_multinode-247612.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247612 ssh -n multinode-247612 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247612 cp multinode-247612:/home/docker/cp-test.txt multinode-247612-m02:/home/docker/cp-test_multinode-247612_multinode-247612-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247612 ssh -n multinode-247612 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247612 ssh -n multinode-247612-m02 "sudo cat /home/docker/cp-test_multinode-247612_multinode-247612-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247612 cp multinode-247612:/home/docker/cp-test.txt multinode-247612-m03:/home/docker/cp-test_multinode-247612_multinode-247612-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247612 ssh -n multinode-247612 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247612 ssh -n multinode-247612-m03 "sudo cat /home/docker/cp-test_multinode-247612_multinode-247612-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247612 cp testdata/cp-test.txt multinode-247612-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247612 ssh -n multinode-247612-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247612 cp multinode-247612-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2119965336/001/cp-test_multinode-247612-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247612 ssh -n multinode-247612-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247612 cp multinode-247612-m02:/home/docker/cp-test.txt multinode-247612:/home/docker/cp-test_multinode-247612-m02_multinode-247612.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247612 ssh -n multinode-247612-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247612 ssh -n multinode-247612 "sudo cat /home/docker/cp-test_multinode-247612-m02_multinode-247612.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247612 cp multinode-247612-m02:/home/docker/cp-test.txt multinode-247612-m03:/home/docker/cp-test_multinode-247612-m02_multinode-247612-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247612 ssh -n multinode-247612-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247612 ssh -n multinode-247612-m03 "sudo cat /home/docker/cp-test_multinode-247612-m02_multinode-247612-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247612 cp testdata/cp-test.txt multinode-247612-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247612 ssh -n multinode-247612-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247612 cp multinode-247612-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2119965336/001/cp-test_multinode-247612-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247612 ssh -n multinode-247612-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247612 cp multinode-247612-m03:/home/docker/cp-test.txt multinode-247612:/home/docker/cp-test_multinode-247612-m03_multinode-247612.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247612 ssh -n multinode-247612-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247612 ssh -n multinode-247612 "sudo cat /home/docker/cp-test_multinode-247612-m03_multinode-247612.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247612 cp multinode-247612-m03:/home/docker/cp-test.txt multinode-247612-m02:/home/docker/cp-test_multinode-247612-m03_multinode-247612-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247612 ssh -n multinode-247612-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247612 ssh -n multinode-247612-m02 "sudo cat /home/docker/cp-test_multinode-247612-m03_multinode-247612-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.76s)
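
Every step above follows the same pattern: copy a file with "minikube cp", then read it back over ssh on the target node and compare. A condensed Go sketch of one host-to-node round trip, using the binary, profile, and paths from this run:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

func main() {
	const bin = "out/minikube-linux-amd64"
	const profile = "multinode-247612"

	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		log.Fatal(err)
	}
	// Copy from the host into the primary node.
	if err := exec.Command(bin, "-p", profile, "cp",
		"testdata/cp-test.txt", profile+":/home/docker/cp-test.txt").Run(); err != nil {
		log.Fatal(err)
	}
	// Read the file back on that node and compare contents.
	got, err := exec.Command(bin, "-p", profile, "ssh", "-n", profile,
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		log.Fatal(err)
	}
	if string(got) != string(want) {
		log.Fatalf("copied file differs:\n%s", got)
	}
	fmt.Println("cp round trip ok")
}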

                                                
                                    
TestMultiNode/serial/StopNode (3.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247612 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-247612 node stop m03: (2.301152096s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247612 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-247612 status: exit status 7 (446.119393ms)

                                                
                                                
-- stdout --
	multinode-247612
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-247612-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-247612-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247612 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-247612 status --alsologtostderr: exit status 7 (460.316419ms)

                                                
                                                
-- stdout --
	multinode-247612
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-247612-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-247612-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0510 18:52:00.785328  426513 out.go:345] Setting OutFile to fd 1 ...
	I0510 18:52:00.785460  426513 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 18:52:00.785469  426513 out.go:358] Setting ErrFile to fd 2...
	I0510 18:52:00.785473  426513 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 18:52:00.785688  426513 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-388787/.minikube/bin
	I0510 18:52:00.785837  426513 out.go:352] Setting JSON to false
	I0510 18:52:00.785871  426513 mustload.go:65] Loading cluster: multinode-247612
	I0510 18:52:00.785925  426513 notify.go:220] Checking for updates...
	I0510 18:52:00.786230  426513 config.go:182] Loaded profile config "multinode-247612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 18:52:00.786249  426513 status.go:174] checking status of multinode-247612 ...
	I0510 18:52:00.786645  426513 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 18:52:00.786686  426513 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:52:00.803531  426513 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44317
	I0510 18:52:00.803988  426513 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:52:00.804597  426513 main.go:141] libmachine: Using API Version  1
	I0510 18:52:00.804616  426513 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:52:00.805008  426513 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:52:00.805243  426513 main.go:141] libmachine: (multinode-247612) Calling .GetState
	I0510 18:52:00.807095  426513 status.go:371] multinode-247612 host status = "Running" (err=<nil>)
	I0510 18:52:00.807117  426513 host.go:66] Checking if "multinode-247612" exists ...
	I0510 18:52:00.807465  426513 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 18:52:00.807507  426513 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:52:00.822952  426513 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41507
	I0510 18:52:00.823432  426513 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:52:00.824045  426513 main.go:141] libmachine: Using API Version  1
	I0510 18:52:00.824077  426513 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:52:00.824453  426513 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:52:00.824673  426513 main.go:141] libmachine: (multinode-247612) Calling .GetIP
	I0510 18:52:00.827311  426513 main.go:141] libmachine: (multinode-247612) DBG | domain multinode-247612 has defined MAC address 52:54:00:b4:e1:e3 in network mk-multinode-247612
	I0510 18:52:00.827747  426513 main.go:141] libmachine: (multinode-247612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e1:e3", ip: ""} in network mk-multinode-247612: {Iface:virbr1 ExpiryTime:2025-05-10 19:49:16 +0000 UTC Type:0 Mac:52:54:00:b4:e1:e3 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:multinode-247612 Clientid:01:52:54:00:b4:e1:e3}
	I0510 18:52:00.827790  426513 main.go:141] libmachine: (multinode-247612) DBG | domain multinode-247612 has defined IP address 192.168.39.246 and MAC address 52:54:00:b4:e1:e3 in network mk-multinode-247612
	I0510 18:52:00.827881  426513 host.go:66] Checking if "multinode-247612" exists ...
	I0510 18:52:00.828186  426513 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 18:52:00.828231  426513 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:52:00.844707  426513 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38061
	I0510 18:52:00.845304  426513 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:52:00.845760  426513 main.go:141] libmachine: Using API Version  1
	I0510 18:52:00.845784  426513 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:52:00.846167  426513 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:52:00.846359  426513 main.go:141] libmachine: (multinode-247612) Calling .DriverName
	I0510 18:52:00.846555  426513 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0510 18:52:00.846589  426513 main.go:141] libmachine: (multinode-247612) Calling .GetSSHHostname
	I0510 18:52:00.849434  426513 main.go:141] libmachine: (multinode-247612) DBG | domain multinode-247612 has defined MAC address 52:54:00:b4:e1:e3 in network mk-multinode-247612
	I0510 18:52:00.849894  426513 main.go:141] libmachine: (multinode-247612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e1:e3", ip: ""} in network mk-multinode-247612: {Iface:virbr1 ExpiryTime:2025-05-10 19:49:16 +0000 UTC Type:0 Mac:52:54:00:b4:e1:e3 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:multinode-247612 Clientid:01:52:54:00:b4:e1:e3}
	I0510 18:52:00.849928  426513 main.go:141] libmachine: (multinode-247612) DBG | domain multinode-247612 has defined IP address 192.168.39.246 and MAC address 52:54:00:b4:e1:e3 in network mk-multinode-247612
	I0510 18:52:00.850074  426513 main.go:141] libmachine: (multinode-247612) Calling .GetSSHPort
	I0510 18:52:00.850248  426513 main.go:141] libmachine: (multinode-247612) Calling .GetSSHKeyPath
	I0510 18:52:00.850376  426513 main.go:141] libmachine: (multinode-247612) Calling .GetSSHUsername
	I0510 18:52:00.850515  426513 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/multinode-247612/id_rsa Username:docker}
	I0510 18:52:00.939945  426513 ssh_runner.go:195] Run: systemctl --version
	I0510 18:52:00.946517  426513 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0510 18:52:00.969930  426513 kubeconfig.go:125] found "multinode-247612" server: "https://192.168.39.246:8443"
	I0510 18:52:00.969978  426513 api_server.go:166] Checking apiserver status ...
	I0510 18:52:00.970018  426513 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 18:52:00.990957  426513 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1470/cgroup
	W0510 18:52:01.003378  426513 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1470/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0510 18:52:01.003449  426513 ssh_runner.go:195] Run: ls
	I0510 18:52:01.010515  426513 api_server.go:253] Checking apiserver healthz at https://192.168.39.246:8443/healthz ...
	I0510 18:52:01.015189  426513 api_server.go:279] https://192.168.39.246:8443/healthz returned 200:
	ok
	I0510 18:52:01.015221  426513 status.go:463] multinode-247612 apiserver status = Running (err=<nil>)
	I0510 18:52:01.015246  426513 status.go:176] multinode-247612 status: &{Name:multinode-247612 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0510 18:52:01.015284  426513 status.go:174] checking status of multinode-247612-m02 ...
	I0510 18:52:01.015680  426513 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 18:52:01.015729  426513 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:52:01.031620  426513 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42573
	I0510 18:52:01.032150  426513 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:52:01.032709  426513 main.go:141] libmachine: Using API Version  1
	I0510 18:52:01.032737  426513 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:52:01.033128  426513 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:52:01.033351  426513 main.go:141] libmachine: (multinode-247612-m02) Calling .GetState
	I0510 18:52:01.034874  426513 status.go:371] multinode-247612-m02 host status = "Running" (err=<nil>)
	I0510 18:52:01.034893  426513 host.go:66] Checking if "multinode-247612-m02" exists ...
	I0510 18:52:01.035452  426513 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 18:52:01.035532  426513 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:52:01.051661  426513 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33511
	I0510 18:52:01.052094  426513 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:52:01.052508  426513 main.go:141] libmachine: Using API Version  1
	I0510 18:52:01.052526  426513 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:52:01.052882  426513 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:52:01.053065  426513 main.go:141] libmachine: (multinode-247612-m02) Calling .GetIP
	I0510 18:52:01.056009  426513 main.go:141] libmachine: (multinode-247612-m02) DBG | domain multinode-247612-m02 has defined MAC address 52:54:00:d4:6e:db in network mk-multinode-247612
	I0510 18:52:01.056463  426513 main.go:141] libmachine: (multinode-247612-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:6e:db", ip: ""} in network mk-multinode-247612: {Iface:virbr1 ExpiryTime:2025-05-10 19:50:19 +0000 UTC Type:0 Mac:52:54:00:d4:6e:db Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:multinode-247612-m02 Clientid:01:52:54:00:d4:6e:db}
	I0510 18:52:01.056507  426513 main.go:141] libmachine: (multinode-247612-m02) DBG | domain multinode-247612-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:d4:6e:db in network mk-multinode-247612
	I0510 18:52:01.056635  426513 host.go:66] Checking if "multinode-247612-m02" exists ...
	I0510 18:52:01.057049  426513 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 18:52:01.057095  426513 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:52:01.072700  426513 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38633
	I0510 18:52:01.073202  426513 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:52:01.073625  426513 main.go:141] libmachine: Using API Version  1
	I0510 18:52:01.073661  426513 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:52:01.074013  426513 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:52:01.074257  426513 main.go:141] libmachine: (multinode-247612-m02) Calling .DriverName
	I0510 18:52:01.074483  426513 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0510 18:52:01.074512  426513 main.go:141] libmachine: (multinode-247612-m02) Calling .GetSSHHostname
	I0510 18:52:01.078179  426513 main.go:141] libmachine: (multinode-247612-m02) DBG | domain multinode-247612-m02 has defined MAC address 52:54:00:d4:6e:db in network mk-multinode-247612
	I0510 18:52:01.078653  426513 main.go:141] libmachine: (multinode-247612-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:6e:db", ip: ""} in network mk-multinode-247612: {Iface:virbr1 ExpiryTime:2025-05-10 19:50:19 +0000 UTC Type:0 Mac:52:54:00:d4:6e:db Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:multinode-247612-m02 Clientid:01:52:54:00:d4:6e:db}
	I0510 18:52:01.078682  426513 main.go:141] libmachine: (multinode-247612-m02) DBG | domain multinode-247612-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:d4:6e:db in network mk-multinode-247612
	I0510 18:52:01.078908  426513 main.go:141] libmachine: (multinode-247612-m02) Calling .GetSSHPort
	I0510 18:52:01.079109  426513 main.go:141] libmachine: (multinode-247612-m02) Calling .GetSSHKeyPath
	I0510 18:52:01.079305  426513 main.go:141] libmachine: (multinode-247612-m02) Calling .GetSSHUsername
	I0510 18:52:01.079504  426513 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-388787/.minikube/machines/multinode-247612-m02/id_rsa Username:docker}
	I0510 18:52:01.160016  426513 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0510 18:52:01.177140  426513 status.go:176] multinode-247612-m02 status: &{Name:multinode-247612-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0510 18:52:01.177191  426513 status.go:174] checking status of multinode-247612-m03 ...
	I0510 18:52:01.177624  426513 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 18:52:01.177676  426513 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:52:01.194353  426513 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35497
	I0510 18:52:01.194934  426513 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:52:01.195471  426513 main.go:141] libmachine: Using API Version  1
	I0510 18:52:01.195495  426513 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:52:01.195900  426513 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:52:01.196103  426513 main.go:141] libmachine: (multinode-247612-m03) Calling .GetState
	I0510 18:52:01.197560  426513 status.go:371] multinode-247612-m03 host status = "Stopped" (err=<nil>)
	I0510 18:52:01.197575  426513 status.go:384] host is not running, skipping remaining checks
	I0510 18:52:01.197581  426513 status.go:176] multinode-247612-m03 status: &{Name:multinode-247612-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.21s)
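
Note that "minikube status" exits non-zero (7 in this run) when any node's host is stopped, even though the report itself was produced successfully. A wrapper script should therefore inspect the exit code instead of treating non-zero as a hard failure; a hedged Go sketch of that handling:

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-247612", "status")
	out, err := cmd.Output()
	fmt.Print(string(out)) // the per-node report shown above

	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("all nodes running")
	case errors.As(err, &ee):
		// Exit status 7 was observed in this run with one node stopped; treat any
		// non-zero code as "degraded" rather than as a failure to run the command.
		fmt.Printf("status exited %d: some component is not running\n", ee.ExitCode())
	default:
		log.Fatal(err) // the binary could not be run at all
	}
}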

                                                
                                    
TestMultiNode/serial/StartAfterStop (38.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247612 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-247612 node start m03 -v=5 --alsologtostderr: (38.055788151s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247612 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.74s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (327.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-247612
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-247612
E0510 18:53:48.489688  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:54:37.818881  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-247612: (3m4.225202639s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-247612 --wait=true -v=5 --alsologtostderr
E0510 18:56:51.559357  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-247612 --wait=true -v=5 --alsologtostderr: (2m23.64178739s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-247612
--- PASS: TestMultiNode/serial/RestartKeepsNodes (327.97s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247612 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-247612 node delete m03: (2.340211607s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247612 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.96s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (181.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247612 stop
E0510 18:58:48.489436  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:59:37.810576  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-247612 stop: (3m1.784511156s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247612 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-247612 status: exit status 7 (97.469832ms)

                                                
                                                
-- stdout --
	multinode-247612
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-247612-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247612 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-247612 status --alsologtostderr: exit status 7 (86.878788ms)

                                                
                                                
-- stdout --
	multinode-247612
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-247612-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0510 19:01:12.792138  429446 out.go:345] Setting OutFile to fd 1 ...
	I0510 19:01:12.792420  429446 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 19:01:12.792431  429446 out.go:358] Setting ErrFile to fd 2...
	I0510 19:01:12.792437  429446 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 19:01:12.792657  429446 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-388787/.minikube/bin
	I0510 19:01:12.792827  429446 out.go:352] Setting JSON to false
	I0510 19:01:12.792883  429446 mustload.go:65] Loading cluster: multinode-247612
	I0510 19:01:12.792954  429446 notify.go:220] Checking for updates...
	I0510 19:01:12.793281  429446 config.go:182] Loaded profile config "multinode-247612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 19:01:12.793311  429446 status.go:174] checking status of multinode-247612 ...
	I0510 19:01:12.793757  429446 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:01:12.793808  429446 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:01:12.809280  429446 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45579
	I0510 19:01:12.809877  429446 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:01:12.810597  429446 main.go:141] libmachine: Using API Version  1
	I0510 19:01:12.810636  429446 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:01:12.811142  429446 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:01:12.811373  429446 main.go:141] libmachine: (multinode-247612) Calling .GetState
	I0510 19:01:12.813161  429446 status.go:371] multinode-247612 host status = "Stopped" (err=<nil>)
	I0510 19:01:12.813178  429446 status.go:384] host is not running, skipping remaining checks
	I0510 19:01:12.813184  429446 status.go:176] multinode-247612 status: &{Name:multinode-247612 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0510 19:01:12.813218  429446 status.go:174] checking status of multinode-247612-m02 ...
	I0510 19:01:12.813541  429446 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0510 19:01:12.813624  429446 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 19:01:12.828951  429446 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36357
	I0510 19:01:12.829397  429446 main.go:141] libmachine: () Calling .GetVersion
	I0510 19:01:12.829904  429446 main.go:141] libmachine: Using API Version  1
	I0510 19:01:12.829932  429446 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 19:01:12.830299  429446 main.go:141] libmachine: () Calling .GetMachineName
	I0510 19:01:12.830484  429446 main.go:141] libmachine: (multinode-247612-m02) Calling .GetState
	I0510 19:01:12.832246  429446 status.go:371] multinode-247612-m02 host status = "Stopped" (err=<nil>)
	I0510 19:01:12.832262  429446 status.go:384] host is not running, skipping remaining checks
	I0510 19:01:12.832267  429446 status.go:176] multinode-247612-m02 status: &{Name:multinode-247612-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (181.97s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (91.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-247612 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-247612 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m30.418058901s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247612 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (91.02s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (48.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-247612
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-247612-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-247612-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (76.269976ms)

                                                
                                                
-- stdout --
	* [multinode-247612-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20720
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20720-388787/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-388787/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-247612-m02' is duplicated with machine name 'multinode-247612-m02' in profile 'multinode-247612'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-247612-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-247612-m03 --driver=kvm2  --container-runtime=crio: (47.0079852s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-247612
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-247612: exit status 80 (241.704737ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-247612 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-247612-m03 already exists in multinode-247612-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-247612-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (48.37s)

                                                
                                    
TestScheduledStopUnix (115.54s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-549555 --memory=2048 --driver=kvm2  --container-runtime=crio
E0510 19:08:48.489378  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-549555 --memory=2048 --driver=kvm2  --container-runtime=crio: (43.74134047s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-549555 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-549555 -n scheduled-stop-549555
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-549555 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0510 19:09:07.372296  395980 retry.go:31] will retry after 125.993µs: open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/scheduled-stop-549555/pid: no such file or directory
I0510 19:09:07.373447  395980 retry.go:31] will retry after 178.951µs: open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/scheduled-stop-549555/pid: no such file or directory
I0510 19:09:07.374629  395980 retry.go:31] will retry after 230.722µs: open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/scheduled-stop-549555/pid: no such file or directory
I0510 19:09:07.375786  395980 retry.go:31] will retry after 171.688µs: open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/scheduled-stop-549555/pid: no such file or directory
I0510 19:09:07.376955  395980 retry.go:31] will retry after 440.47µs: open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/scheduled-stop-549555/pid: no such file or directory
I0510 19:09:07.378117  395980 retry.go:31] will retry after 953.358µs: open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/scheduled-stop-549555/pid: no such file or directory
I0510 19:09:07.379266  395980 retry.go:31] will retry after 1.564021ms: open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/scheduled-stop-549555/pid: no such file or directory
I0510 19:09:07.381511  395980 retry.go:31] will retry after 2.249254ms: open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/scheduled-stop-549555/pid: no such file or directory
I0510 19:09:07.384803  395980 retry.go:31] will retry after 2.113619ms: open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/scheduled-stop-549555/pid: no such file or directory
I0510 19:09:07.388052  395980 retry.go:31] will retry after 2.020677ms: open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/scheduled-stop-549555/pid: no such file or directory
I0510 19:09:07.390219  395980 retry.go:31] will retry after 6.700608ms: open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/scheduled-stop-549555/pid: no such file or directory
I0510 19:09:07.397482  395980 retry.go:31] will retry after 11.501101ms: open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/scheduled-stop-549555/pid: no such file or directory
I0510 19:09:07.409786  395980 retry.go:31] will retry after 12.462082ms: open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/scheduled-stop-549555/pid: no such file or directory
I0510 19:09:07.423122  395980 retry.go:31] will retry after 12.617067ms: open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/scheduled-stop-549555/pid: no such file or directory
I0510 19:09:07.436388  395980 retry.go:31] will retry after 21.753379ms: open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/scheduled-stop-549555/pid: no such file or directory
I0510 19:09:07.458694  395980 retry.go:31] will retry after 40.336451ms: open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/scheduled-stop-549555/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-549555 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-549555 -n scheduled-stop-549555
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-549555
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-549555 --schedule 15s
E0510 19:09:37.811733  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-549555
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-549555: exit status 7 (77.649688ms)

                                                
                                                
-- stdout --
	scheduled-stop-549555
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-549555 -n scheduled-stop-549555
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-549555 -n scheduled-stop-549555: exit status 7 (68.598629ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-549555" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-549555
--- PASS: TestScheduledStopUnix (115.54s)
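
The scheduled-stop flow exercised above is: arm a stop with "--schedule <duration>", confirm it is pending through the TimeToStop status field, optionally disarm it with "--cancel-scheduled", and finally observe the machine reach Stopped (at which point status exits 7). A minimal Go sketch of arming and then cancelling a schedule with the same commands and profile name as this run:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run invokes the test binary and aborts on any non-zero exit.
func run(args ...string) string {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%v failed: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	const profile = "scheduled-stop-549555"

	// Arm a stop 5 minutes from now; the command returns immediately.
	run("stop", "-p", profile, "--schedule", "5m")

	// The pending schedule is visible through the TimeToStop status field.
	fmt.Print(run("status", "--format={{.TimeToStop}}", "-p", profile))

	// Change of plan: disarm the scheduled stop again.
	run("stop", "-p", profile, "--cancel-scheduled")
}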

                                                
                                    
TestRunningBinaryUpgrade (200.46s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.176224885 start -p running-upgrade-085041 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.176224885 start -p running-upgrade-085041 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m42.727395586s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-085041 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-085041 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m35.606997156s)
helpers_test.go:175: Cleaning up "running-upgrade-085041" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-085041
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-085041: (1.694468391s)
--- PASS: TestRunningBinaryUpgrade (200.46s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-065180 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-065180 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (88.622394ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-065180] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20720
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20720-388787/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-388787/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (75.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-065180 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-065180 --driver=kvm2  --container-runtime=crio: (1m15.116703274s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-065180 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (75.40s)

                                                
                                    
TestNetworkPlugins/group/false (4.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-380533 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-380533 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (120.937748ms)

                                                
                                                
-- stdout --
	* [false-380533] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20720
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20720-388787/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-388787/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0510 19:10:22.427840  434198 out.go:345] Setting OutFile to fd 1 ...
	I0510 19:10:22.427972  434198 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 19:10:22.427988  434198 out.go:358] Setting ErrFile to fd 2...
	I0510 19:10:22.427995  434198 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 19:10:22.428210  434198 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-388787/.minikube/bin
	I0510 19:10:22.428860  434198 out.go:352] Setting JSON to false
	I0510 19:10:22.429886  434198 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":31970,"bootTime":1746872252,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0510 19:10:22.429994  434198 start.go:140] virtualization: kvm guest
	I0510 19:10:22.431783  434198 out.go:177] * [false-380533] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0510 19:10:22.433259  434198 out.go:177]   - MINIKUBE_LOCATION=20720
	I0510 19:10:22.433252  434198 notify.go:220] Checking for updates...
	I0510 19:10:22.434566  434198 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0510 19:10:22.435862  434198 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20720-388787/kubeconfig
	I0510 19:10:22.437047  434198 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-388787/.minikube
	I0510 19:10:22.438334  434198 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0510 19:10:22.439964  434198 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0510 19:10:22.441644  434198 config.go:182] Loaded profile config "NoKubernetes-065180": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 19:10:22.441768  434198 config.go:182] Loaded profile config "offline-crio-031624": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 19:10:22.441878  434198 driver.go:404] Setting default libvirt URI to qemu:///system
	I0510 19:10:22.479744  434198 out.go:177] * Using the kvm2 driver based on user configuration
	I0510 19:10:22.481067  434198 start.go:304] selected driver: kvm2
	I0510 19:10:22.481088  434198 start.go:908] validating driver "kvm2" against <nil>
	I0510 19:10:22.481105  434198 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0510 19:10:22.483333  434198 out.go:201] 
	W0510 19:10:22.484758  434198 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0510 19:10:22.485950  434198 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-380533 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-380533

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-380533

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-380533

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-380533

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-380533

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-380533

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-380533

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-380533

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-380533

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-380533

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380533"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380533"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380533"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-380533

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380533"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380533"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-380533" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-380533" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-380533" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-380533" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-380533" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-380533" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-380533" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-380533" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380533"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380533"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380533"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380533"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380533"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-380533" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-380533" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-380533" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380533"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380533"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380533"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380533"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380533"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-380533

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380533"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380533"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380533"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380533"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380533"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380533"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380533"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380533"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380533"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380533"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380533"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380533"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380533"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380533"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380533"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380533"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380533"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380533"

                                                
                                                
----------------------- debugLogs end: false-380533 [took: 3.863122099s] --------------------------------
helpers_test.go:175: Cleaning up "false-380533" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-380533
--- PASS: TestNetworkPlugins/group/false (4.14s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (39.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-065180 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-065180 --no-kubernetes --driver=kvm2  --container-runtime=crio: (37.631485603s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-065180 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-065180 status -o json: exit status 2 (274.858439ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-065180","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-065180
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-065180: (1.156899645s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (39.06s)

                                                
                                    
TestPause/serial/Start (105.4s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-317241 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-317241 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m45.398833873s)
--- PASS: TestPause/serial/Start (105.40s)

                                                
                                    
TestNoKubernetes/serial/Start (51.48s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-065180 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-065180 --no-kubernetes --driver=kvm2  --container-runtime=crio: (51.483445427s)
--- PASS: TestNoKubernetes/serial/Start (51.48s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-065180 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-065180 "sudo systemctl is-active --quiet service kubelet": exit status 1 (206.370473ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (31.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (15.211208323s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
E0510 19:13:31.561120  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (15.844246846s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.06s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.47s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-065180
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-065180: (1.472256319s)
--- PASS: TestNoKubernetes/serial/Stop (1.47s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (22.5s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-065180 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-065180 --driver=kvm2  --container-runtime=crio: (22.499069332s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (22.50s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-065180 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-065180 "sudo systemctl is-active --quiet service kubelet": exit status 1 (217.302534ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.38s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.38s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (131.32s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2552562420 start -p stopped-upgrade-181866 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2552562420 start -p stopped-upgrade-181866 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m5.238543121s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2552562420 -p stopped-upgrade-181866 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2552562420 -p stopped-upgrade-181866 stop: (12.194055806s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-181866 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-181866 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (53.891132941s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (131.32s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (113.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-380533 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-380533 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m53.738879002s)
--- PASS: TestNetworkPlugins/group/auto/Start (113.74s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.04s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-181866
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-181866: (1.04130334s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.04s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (76.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-380533 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-380533 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m16.330587377s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (76.33s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (75.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-380533 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-380533 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m15.467854447s)
--- PASS: TestNetworkPlugins/group/calico/Start (75.47s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-380533 "pgrep -a kubelet"
I0510 19:18:26.119882  395980 config.go:182] Loaded profile config "auto-380533": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-380533 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-72l8d" [1aefa97d-4132-4cbe-9d3b-3872860907d7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-72l8d" [1aefa97d-4132-4cbe-9d3b-3872860907d7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003808347s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-6pdmw" [87031da3-9b5d-412e-a538-a35bfe62740e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004737657s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-380533 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-380533 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-380533 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-380533 "pgrep -a kubelet"
I0510 19:18:43.488870  395980 config.go:182] Loaded profile config "kindnet-380533": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (12.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-380533 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context kindnet-380533 replace --force -f testdata/netcat-deployment.yaml: (1.102068572s)
I0510 19:18:44.594437  395980 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-7wqz8" [98e2ef18-2b1c-4e07-8424-5adc811e9157] Pending
helpers_test.go:344: "netcat-5d86dc444-7wqz8" [98e2ef18-2b1c-4e07-8424-5adc811e9157] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0510 19:18:48.488991  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-7wqz8" [98e2ef18-2b1c-4e07-8424-5adc811e9157] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.005009745s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.39s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (84.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-380533 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-380533 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m24.779607404s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (84.78s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-380533 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-380533 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-380533 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (110.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-380533 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-380533 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m50.638222161s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (110.64s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (115.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-380533 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-380533 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m55.439898444s)
--- PASS: TestNetworkPlugins/group/flannel/Start (115.44s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-km9qs" [de4c9e51-0b0a-424d-98ad-4c4c1f02d190] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005073891s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-380533 "pgrep -a kubelet"
I0510 19:19:31.634635  395980 config.go:182] Loaded profile config "calico-380533": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-380533 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-x7lxv" [19b3c3cd-c043-49ff-bb72-9dcf087fc8d5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0510 19:19:37.810362  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-x7lxv" [19b3c3cd-c043-49ff-bb72-9dcf087fc8d5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004367925s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.35s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-380533 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-380533 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-380533 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (90.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-380533 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-380533 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m30.796901201s)
--- PASS: TestNetworkPlugins/group/bridge/Start (90.80s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-380533 "pgrep -a kubelet"
I0510 19:20:20.138587  395980 config.go:182] Loaded profile config "custom-flannel-380533": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-380533 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-2skx7" [04be87e3-ff4a-4ae7-8439-2c729ec17038] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-2skx7" [04be87e3-ff4a-4ae7-8439-2c729ec17038] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004469339s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.28s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-380533 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-380533 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-380533 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-380533 "pgrep -a kubelet"
I0510 19:21:01.466636  395980 config.go:182] Loaded profile config "enable-default-cni-380533": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-380533 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-r796d" [efca0b8b-d8f4-43a6-8dbf-42d6fcf5f2bf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-r796d" [efca0b8b-d8f4-43a6-8dbf-42d6fcf5f2bf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.003576659s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.30s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-j5rlr" [c1540d02-8093-40f5-a420-81845631347c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00423473s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-380533 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-380533 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-380533 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-380533 "pgrep -a kubelet"
I0510 19:21:17.029705  395980 config.go:182] Loaded profile config "flannel-380533": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-380533 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-4hq8l" [31fdf886-e154-4e44-888d-cc6f972ea117] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-4hq8l" [31fdf886-e154-4e44-888d-cc6f972ea117] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.005235831s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.25s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-380533 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-380533 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.28s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-380533 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (111.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-433152 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.33.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-433152 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.33.0: (1m51.032879814s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (111.03s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-380533 "pgrep -a kubelet"
I0510 19:21:34.129639  395980 config.go:182] Loaded profile config "bridge-380533": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-380533 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-jmts5" [922522e6-2092-48e6-850a-5bc2978c545b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-jmts5" [922522e6-2092-48e6-850a-5bc2978c545b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.005742687s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.30s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-380533 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-380533 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-380533 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (103.94s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-544623 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.33.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-544623 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.33.0: (1m43.943189237s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (103.94s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (78.66s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-298069 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.33.0
E0510 19:22:40.896580  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-298069 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.33.0: (1m18.664545298s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (78.66s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.14s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-298069 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-298069 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.13911151s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.14s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (10.58s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-298069 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-298069 --alsologtostderr -v=3: (10.5835954s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.58s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (11.34s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-433152 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a1fe4934-b10d-415a-ac62-8d8f3d63fb5b] Pending
helpers_test.go:344: "busybox" [a1fe4934-b10d-415a-ac62-8d8f3d63fb5b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0510 19:23:26.360302  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/auto-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:23:26.366734  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/auto-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:23:26.378188  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/auto-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:23:26.399610  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/auto-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:23:26.441108  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/auto-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:23:26.522676  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/auto-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:23:26.684459  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/auto-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:23:27.006243  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/auto-380533/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [a1fe4934-b10d-415a-ac62-8d8f3d63fb5b] Running
E0510 19:23:27.648355  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/auto-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:23:28.930305  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/auto-380533/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.003857935s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-433152 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.34s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.32s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-544623 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e051b9c6-a887-4a8c-afd4-e5cba260a5e0] Pending
helpers_test.go:344: "busybox" [e051b9c6-a887-4a8c-afd4-e5cba260a5e0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e051b9c6-a887-4a8c-afd4-e5cba260a5e0] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.005018079s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-544623 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.32s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-298069 -n newest-cni-298069
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-298069 -n newest-cni-298069: exit status 7 (77.311901ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-298069 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (38.43s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-298069 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.33.0
E0510 19:23:31.492363  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/auto-380533/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-298069 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.33.0: (38.160592827s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-298069 -n newest-cni-298069
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (38.43s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-433152 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-433152 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.128376343s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-433152 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (91.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-433152 --alsologtostderr -v=3
E0510 19:23:36.614432  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/auto-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:23:37.127428  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kindnet-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:23:37.133880  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kindnet-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:23:37.145350  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kindnet-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:23:37.166870  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kindnet-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:23:37.208368  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kindnet-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:23:37.289954  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kindnet-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:23:37.451606  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kindnet-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:23:37.772986  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kindnet-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:23:38.414341  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kindnet-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:23:39.695763  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kindnet-380533/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-433152 --alsologtostderr -v=3: (1m31.056803776s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (91.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-544623 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-544623 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.161604757s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-544623 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (91.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-544623 --alsologtostderr -v=3
E0510 19:23:42.258432  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kindnet-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:23:46.856407  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/auto-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:23:47.380305  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kindnet-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:23:48.489929  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/functional-581506/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:23:57.622562  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kindnet-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:24:07.338738  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/auto-380533/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-544623 --alsologtostderr -v=3: (1m31.122360344s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.12s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-298069 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.61s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-298069 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-298069 -n newest-cni-298069
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-298069 -n newest-cni-298069: exit status 2 (255.989742ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-298069 -n newest-cni-298069
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-298069 -n newest-cni-298069: exit status 2 (258.007268ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-298069 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-298069 -n newest-cni-298069
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-298069 -n newest-cni-298069
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.61s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (88.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-483140 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.33.0
E0510 19:24:18.104129  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kindnet-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:24:25.377743  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/calico-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:24:25.384243  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/calico-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:24:25.395709  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/calico-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:24:25.417138  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/calico-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:24:25.459241  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/calico-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:24:25.540774  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/calico-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:24:25.702510  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/calico-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:24:26.024292  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/calico-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:24:26.666090  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/calico-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:24:27.947544  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/calico-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:24:30.509700  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/calico-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:24:35.631890  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/calico-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:24:37.810371  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/addons-573653/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:24:45.874058  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/calico-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:24:48.300838  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/auto-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:24:59.066092  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kindnet-380533/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-483140 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.33.0: (1m28.25338941s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (88.25s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-433152 -n no-preload-433152
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-433152 -n no-preload-433152: exit status 7 (78.53379ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-433152 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (64.95s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-433152 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.33.0
E0510 19:25:06.355455  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/calico-380533/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-433152 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.33.0: (1m4.627773503s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-433152 -n no-preload-433152
E0510 19:26:10.805151  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/flannel-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:26:10.811585  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/flannel-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:26:10.823101  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/flannel-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:26:10.844637  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/flannel-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:26:10.886143  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/flannel-380533/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (64.95s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-544623 -n default-k8s-diff-port-544623
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-544623 -n default-k8s-diff-port-544623: exit status 7 (75.398659ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-544623 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (63.8s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-544623 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.33.0
E0510 19:25:20.402837  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/custom-flannel-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:25:20.409279  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/custom-flannel-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:25:20.420827  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/custom-flannel-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:25:20.442366  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/custom-flannel-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:25:20.483922  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/custom-flannel-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:25:20.565461  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/custom-flannel-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:25:20.727171  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/custom-flannel-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:25:21.049140  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/custom-flannel-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:25:21.690552  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/custom-flannel-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:25:22.972536  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/custom-flannel-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:25:25.534557  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/custom-flannel-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:25:30.656628  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/custom-flannel-380533/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-544623 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.33.0: (1m3.474852284s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-544623 -n default-k8s-diff-port-544623
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (63.80s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.36s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-483140 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [30a163a4-c302-4b01-8e98-78eb100511d9] Pending
helpers_test.go:344: "busybox" [30a163a4-c302-4b01-8e98-78eb100511d9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [30a163a4-c302-4b01-8e98-78eb100511d9] Running
E0510 19:25:47.317687  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/calico-380533/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004401973s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-483140 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.36s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.44s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-483140 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-483140 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.332066406s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-483140 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.44s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (91.05s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-483140 --alsologtostderr -v=3
E0510 19:26:01.381228  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/custom-flannel-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:26:01.750079  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/enable-default-cni-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:26:01.756686  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/enable-default-cni-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:26:01.768270  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/enable-default-cni-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:26:01.789833  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/enable-default-cni-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:26:01.831405  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/enable-default-cni-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:26:01.913019  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/enable-default-cni-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:26:02.074978  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/enable-default-cni-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:26:02.397357  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/enable-default-cni-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:26:03.039079  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/enable-default-cni-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:26:04.321414  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/enable-default-cni-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:26:06.883017  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/enable-default-cni-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:26:10.222885  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/auto-380533/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-483140 --alsologtostderr -v=3: (1m31.045623309s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.05s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (7.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0510 19:26:10.969525  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/flannel-380533/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-psc84" [670f3f6b-958e-4269-acd4-9eb46577351b] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0510 19:26:11.131466  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/flannel-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:26:11.453805  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/flannel-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:26:12.005096  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/enable-default-cni-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:26:12.095732  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/flannel-380533/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-psc84" [670f3f6b-958e-4269-acd4-9eb46577351b] Running
E0510 19:26:13.377779  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/flannel-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:26:15.939177  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/flannel-380533/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 7.004761405s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (7.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-f2bqw" [7e6b4921-aafb-4d74-8fbe-cfd721a631e1] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-f2bqw" [7e6b4921-aafb-4d74-8fbe-cfd721a631e1] Running
E0510 19:26:20.988446  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/kindnet-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:26:21.061314  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/flannel-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:26:22.247293  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/enable-default-cni-380533/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003635412s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-psc84" [670f3f6b-958e-4269-acd4-9eb46577351b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004320559s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-433152 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-433152 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-f2bqw" [7e6b4921-aafb-4d74-8fbe-cfd721a631e1] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004915294s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-544623 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (2.93s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-433152 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-433152 -n no-preload-433152
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-433152 -n no-preload-433152: exit status 2 (269.566305ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-433152 -n no-preload-433152
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-433152 -n no-preload-433152: exit status 2 (259.497302ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-433152 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-433152 -n no-preload-433152
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-433152 -n no-preload-433152
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.93s)
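The pause cycle above can be reproduced manually with the same commands the test drives (the report invokes the locally built out/minikube-linux-amd64; plain minikube is used here for brevity). As the log shows, minikube status exits with code 2 while components report Paused or Stopped, which the test treats as acceptable ("may be ok"). A minimal sketch against this run's profile:

	minikube pause -p no-preload-433152
	minikube status -p no-preload-433152 --format='{{.APIServer}}'   # prints "Paused", exit code 2
	minikube status -p no-preload-433152 --format='{{.Kubelet}}'     # prints "Stopped", exit code 2
	minikube unpause -p no-preload-433152
	minikube status -p no-preload-433152 --format='{{.APIServer}}'   # expected to report Running again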

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-544623 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.81s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-544623 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-544623 -n default-k8s-diff-port-544623
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-544623 -n default-k8s-diff-port-544623: exit status 2 (256.168278ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-544623 -n default-k8s-diff-port-544623
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-544623 -n default-k8s-diff-port-544623: exit status 2 (269.188104ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-544623 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-544623 -n default-k8s-diff-port-544623
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-544623 -n default-k8s-diff-port-544623
E0510 19:26:31.303373  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/flannel-380533/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.81s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (5.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-089147 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-089147 --alsologtostderr -v=3: (5.31346185s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (5.31s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-089147 -n old-k8s-version-089147
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-089147 -n old-k8s-version-089147: exit status 7 (69.532726ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-089147 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
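The enable call above registers the dashboard addon with a MetricsScraper image override while the cluster is still stopped. Once this profile has been started again, one way to confirm the override took effect is to read the image straight off the scraper deployment; the container index 0 is an assumption here, the deployment name and namespace come from the other dashboard checks in this report:

	kubectl --context old-k8s-version-089147 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
	# expected: registry.k8s.io/echoserver:1.4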

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-483140 -n embed-certs-483140
E0510 19:27:23.690761  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/enable-default-cni-380533/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-483140 -n embed-certs-483140: exit status 7 (69.06408ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-483140 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (55.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-483140 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.33.0
E0510 19:27:32.747503  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/flannel-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:27:56.344549  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/bridge-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:28:04.265371  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/custom-flannel-380533/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-483140 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.33.0: (54.988031214s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-483140 -n embed-certs-483140
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (55.30s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-lrxfx" [e6cc4baa-94ea-48e0-862e-8a3ccdc547da] Running
E0510 19:28:22.328544  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/no-preload-433152/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:28:22.334984  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/no-preload-433152/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:28:22.346405  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/no-preload-433152/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:28:22.367905  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/no-preload-433152/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:28:22.409414  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/no-preload-433152/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:28:22.490997  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/no-preload-433152/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:28:22.652361  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/no-preload-433152/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:28:22.974180  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/no-preload-433152/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:28:23.615797  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/no-preload-433152/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:28:24.897169  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/no-preload-433152/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0059671s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-lrxfx" [e6cc4baa-94ea-48e0-862e-8a3ccdc547da] Running
E0510 19:28:26.359587  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/auto-380533/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:28:27.458576  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/no-preload-433152/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004511707s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-483140 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-483140 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (2.86s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-483140 --alsologtostderr -v=1
E0510 19:28:30.657053  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/default-k8s-diff-port-544623/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:28:30.663564  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/default-k8s-diff-port-544623/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:28:30.675031  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/default-k8s-diff-port-544623/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:28:30.696821  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/default-k8s-diff-port-544623/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:28:30.738363  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/default-k8s-diff-port-544623/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:28:30.820223  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/default-k8s-diff-port-544623/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:28:30.982168  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/default-k8s-diff-port-544623/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:28:31.304462  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/default-k8s-diff-port-544623/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-483140 -n embed-certs-483140
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-483140 -n embed-certs-483140: exit status 2 (261.321031ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-483140 -n embed-certs-483140
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-483140 -n embed-certs-483140: exit status 2 (264.339767ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-483140 --alsologtostderr -v=1
E0510 19:28:31.946485  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/default-k8s-diff-port-544623/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:28:32.580171  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/no-preload-433152/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-483140 -n embed-certs-483140
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-483140 -n embed-certs-483140
E0510 19:28:33.228194  395980 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-388787/.minikube/profiles/default-k8s-diff-port-544623/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.86s)

                                                
                                    

Test skip (40/321)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.33.0/cached-images 0
15 TestDownloadOnly/v1.33.0/binaries 0
16 TestDownloadOnly/v1.33.0/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.34
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
50 TestDockerFlags 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
129 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
130 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
131 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
132 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
133 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
134 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
135 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
136 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestFunctionalNewestKubernetes 0
157 TestGvisorAddon 0
179 TestImageBuild 0
206 TestKicCustomNetwork 0
207 TestKicExistingNetwork 0
208 TestKicCustomSubnet 0
209 TestKicStaticIP 0
241 TestChangeNoneUser 0
244 TestScheduledStopWindows 0
246 TestSkaffold 0
248 TestInsufficientStorage 0
252 TestMissingContainerUpgrade 0
257 TestNetworkPlugins/group/kubenet 3.59
266 TestNetworkPlugins/group/cilium 3.46
273 TestStartStop/group/disable-driver-mounts 0.15
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.33.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.33.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.33.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.33.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.33.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.33.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.33.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.33.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.33.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.34s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-573653 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.34s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:480: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.59s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:631: 
----------------------- debugLogs start: kubenet-380533 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-380533

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-380533

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-380533

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-380533

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-380533

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-380533

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-380533

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-380533

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-380533

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-380533

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380533"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380533"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380533"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-380533

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380533"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380533"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-380533" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-380533" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-380533" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-380533" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-380533" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-380533" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-380533" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-380533" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380533"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380533"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380533"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380533"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380533"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-380533" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-380533" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-380533" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380533"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380533"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380533"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380533"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380533"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-380533

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380533"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380533"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380533"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380533"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380533"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380533"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380533"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380533"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380533"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380533"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380533"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380533"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380533"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380533"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380533"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380533"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380533"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380533"

                                                
                                                
----------------------- debugLogs end: kubenet-380533 [took: 3.429012342s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-380533" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-380533
--- SKIP: TestNetworkPlugins/group/kubenet (3.59s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:631: 
----------------------- debugLogs start: cilium-380533 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-380533

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-380533

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-380533

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-380533

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-380533

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-380533

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-380533

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-380533

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-380533

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-380533

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380533"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380533"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380533"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-380533

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380533"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380533"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-380533" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-380533" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-380533" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-380533" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-380533" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-380533" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-380533" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-380533" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380533"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380533"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380533"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380533"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380533"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-380533

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-380533

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-380533" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-380533" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-380533

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-380533

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-380533" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-380533" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-380533" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-380533" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-380533" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380533"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380533"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380533"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380533"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380533"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-380533

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380533"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380533"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380533"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380533"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380533"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380533"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380533"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380533"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380533"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380533"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380533"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380533"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380533"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380533"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380533"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380533"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380533"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-380533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380533"

                                                
                                                
----------------------- debugLogs end: cilium-380533 [took: 3.306437446s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-380533" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-380533
--- SKIP: TestNetworkPlugins/group/cilium (3.46s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-947387" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-947387
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    